A Week as a Man, A Conversation With LinkedIn, and Some Data We Need to Talk About.
A closer look at the gap between “neutral” algorithms and gendered outcomes.
Disclaimer: I’m sitting at my keyboard feeling so very nervous about posting this, because I’m very aware of the tiny, wobbly tightrope I’m on here. I don’t think there’s a secret “hide the women” switch buried somewhere at LinkedIn HQ (and never did- so thanks, mansplainers, but you can move on now), but I also can’t pretend the patterns I’m seeing make sense, and living in that weird middle space feels like the perfect recipe for someone to misread my intentions. So I’m doing my best to take the data seriously without inventing villains, acknowledge the complexity without letting everyone off the hook, and stay open to the fact that something real is happening even if no one is explaining it cleanly yet.
And for context, I’ve also been on a two-year-long binge of books and podcasts about how corporations go sideways. It’s almost never a dramatic conspiracy; it’s tiny choices to prioritize short-term profit over all else. Basically lots of “we’ll look into it later” moments, or “we don’t have budget for that” or “technically it’s not ACTUALLY illegal…” decisions that pile up until the outcome is basically indistinguishable from a group of mustache-twirling villains plotting in a boardroom. So that’s the headspace I’m in as I’m looking at all this.
I also know this is bigger than gender. Visibility shifts across race, sexuality, disability, and more. I’m focusing on the part I’ve personally experienced, with the data and conversations I’ve had, but any reader should know that if we’re talking about bias, the broader picture is so much wider than me.
So without further ado…
When my post about “LinkedIn liking me better as a man” went viral, a representative of LinkedIn reached out and asked to talk, and I obviously said yes.
I’m not the official spokesperson for “gender and the algorithm”; I’m a mental health professional. But my experiment has clearly pulled women further into a conversation that affects their income, visibility, and safety, so I feel some responsibility to carry it through.
So here is what I heard, what I still see, and what I think we can test together.
What LinkedIn actually said
I spoke with Laura Lorenzetti Soper, who leads the creator management Global Editorial team at LinkedIn.
Her main points, in plain language:
LinkedIn does not let gender explicitly impact the feed. In her words, “we do not use gender in the ranking signal.”
The recent drop in reach is not just happening to women. They are seeing lower engagement across creators of different genders. (post-publication edit: I got clarification from LinkedIn that the more accurate way to put this is “Overall content creation on LinkedIn is up. This means there’s more competition for attention in the feed, which is happening regardless of gender.”)
Large language models are used, mainly for classification and to keep clearly harmful content out of the feed, but not to rank posts based on tone or word choice.
The actual ranking system is a collection of AI models that use things like early engagement and relevance to decide what gets shown where. (edit: LinkedIn clarified: “The actual ranking system is a collection of AI models that use hundreds of signals, including relevance and engagement as well as signals from your network and activity.”)
They run fairness checks before they ship features. Internally, they test “equal opportunity for equally qualified members,” looking at outcomes across gender and, in the U.S., race.
I also asked whether they use business-coded words or “agentic” (aka male) tone as positive signals. I got a clear “no.” According to Laura, the models that rank the feed are not scanning for words like “scale” or “pipeline” and giving extra points.
From LinkedIn’s point of view, what moved my numbers was not the gender toggle. It was the format of those ChatGPT-generated posts: a strong hook, clear structure, tagged people, early engagement, and a lot of conversation.
And to be fair, those are all real factors that changed during the experiment (even if they raise questions about why my previously well-performing self-written content is now less valuable than ChatGPT rewrites…)
We’re not imagining the drop in reach
LinkedIn knows engagement has shifted.
On the call, Laura told me they’ve “heard this regardless of gender,” meaning their internal teams are seeing lower engagement on creator posts across the board (post-publication clarification from LinkedIn: “changes in engagement- It’s not down for all creators. With more competition amid more creation, there are shifts.”). She shared a few things they believe are contributing:
“Overall creation in the ecosystem has gone up.” More posts mean more competition, with impressions spread across more content.
Early engagement carries a lot of weight. She showed me my own numbers and pointed out how posts that spike on day one continue to climb, while posts that don’t catch on early stall quickly. Who happens to be online in the first hour matters more than most of us want it to.
Timing and cadence. In her words: “Fridays tend to be dead,” Mondays pick up, Tuesday and Wednesday are strong, and weekends can work because there are fewer posts but also fewer viewers. Their team is currently encouraging creators to post three to four times a week and give posts room to grow instead of stacking them.
Video changes. She noted that when they shifted their video approach, “a lot of views were lost from video.” So creators who leaned heavily on video saw a real impact there.
(post-publication clarification from LinkedIn: “Apologies for not being more clear on this -- this has more to do with how we’ve been evolving our video product, which has been focused in the US. Video engagement has been going up locally, you can see those stats here. Video strategy has shifted this year, focusing on select features in the U.S. That means there’s been fluctuations in video performance for some creators.”)
Laura was honest that they don’t have an immediate solution and that it’s something they are “actively thinking and looking at” so people feel like they’re actually reaching their audience.
So if you’re a woman who has watched your numbers tank and wondered whether you somehow fell off overnight, you’re not imagining it. Something in the feed changed. LinkedIn sees it, even if they can’t explain what’s happening.
LLMs are gender-biased. Is LinkedIn, too?
I asked Laura if and where large language models actually show up in all of this, given that LLMs are proven to be gender-biased. (In a follow-up, she noted that 360Brew has not been implemented in this manner- “we do not currently use that in any of our Feed systems.”)
Laura said that the algorithm that ranks our posts in the feed is not a big language model that reads our tone and slots us on the feed. Feed ranking uses other, “non-LLM” AI models that watch signals like early engagement, relevance, and network connections to boost posts. In her words, the LLMs are mainly involved in creating classifiers, and those classifiers are used heavily in trust and safety.
***
Classification means that a piece of content comes in, and the system has to answer “what is this” before it can answer “who might care about it.” Old-school libraries did that with a Dewey Decimal number. Someone read the title and summary, decided “this belongs with psychology,” and gave it a code so it could sit next to similar books on the shelf.
Modern AI does a similar thing with posts, comments, and profiles. It turns each item into a list of numbers that roughly captures its meaning, then stores that list in something like a giant coordinate system called a “vector database.” Things that live near each other in that space are “similar” in the model’s eyes.
A vector database is basically the shelf for those coordinates. It is a place where the system can say, “Find me other things that feel like this one” without rereading the entire internet. That is how you get from “raw text” to “this looks like leadership content” or “this looks like career advice” or “this looks like a post about trauma in the workplace.”
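To make this concrete, here is a toy sketch of the general technique in Python. To be clear: this is not LinkedIn’s actual system. The posts, the three-dimensional vectors, and every number below are invented for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import numpy as np

# Pretend each post has already been turned into a short vector of
# numbers that roughly captures its meaning (toy values, made up).
shelf = {
    "how to scale your sales pipeline":     np.array([0.9, 0.1, 0.1]),
    "quarterly growth metrics that matter": np.array([0.8, 0.2, 0.2]),
    "holding space for grieving clients":   np.array([0.1, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    # Closeness of two vectors: 1.0 means "pointing the same way."
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Find me other things that feel like this one": compare a new post's
# vector against everything on the shelf and sort by closeness.
new_post = np.array([0.15, 0.85, 0.75])  # relational, care-focused language
ranked = sorted(shelf, key=lambda t: cosine_similarity(new_post, shelf[t]), reverse=True)
print(ranked[0])  # nearest neighbor: the grief post, not the sales posts
```

The takeaway: whichever posts were used to define each neighborhood quietly define what “similar” means, which is exactly where the tilt described next can creep in.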
Those groupings are powerful, and they are not neutral. If you teach a classifier what “leadership” content looks like using mostly posts written by a certain demographic in a certain voice, the “leadership” shelf starts to tilt in that direction. Even if gender does not exist as a field in the model, the pattern itself can skew toward people who already sound like the default.
And btw, everything between the asterisks wasn’t from Laura, but from previous work I’ve done with SaaS AI companies- I wrote a piece here for Eleos that talks about these concepts in behavioral health.
***
These concepts matter because they show how classification and language models work together in many products people already use, and because so much power can sit in the choice of categories.
(And LinkedIn doesn’t deny there is bias in AI- you can read what they say about it here.)
Trust and safety is a different job entirely. If classification is the librarian, trust and safety is the bouncer. Once a post is categorized, a separate set of classifiers decides whether it crosses certain lines. That is where LLM-based systems are watching for what Laura called “red” content: hate speech, targeted harassment, graphic material, things that clearly break terms of service. Content that triggers those systems can be removed, limited, or sent to a human for review.
I do believe the intent there is protection, not punishment. At the same time, anyone who has spent time with public-facing AI tools has seen how seemingly innocent questions get denied for going against “the terms”. A clinical description of sexual trauma, a post about reproductive health, or a professional discussion of self-harm protocols can look “unsafe” to a blunt classifier even when the whole point is education and care. When women and people in trauma-heavy fields say “my posts that use the word ‘vagina’ never get seen,” this is one of the places my mind goes.
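To show how blunt that kind of gate can be, here is a deliberately crude sketch. This is not LinkedIn’s classifier; the term list and threshold are invented, and real trust-and-safety models are far more sophisticated, but the failure mode has the same shape:

```python
# A deliberately blunt toy "safety" gate. The flagged terms and the
# threshold are invented for this illustration only.
FLAGGED_TERMS = {"self-harm", "suicide", "trauma", "abuse"}

def looks_unsafe(post: str, threshold: int = 1) -> bool:
    text = post.lower()
    hits = sum(term in text for term in FLAGGED_TERMS)
    return hits >= threshold

# A clinician's educational post trips the same wire that actually
# harmful content would, because the gate can't read intent.
post = "Free training this week: documenting self-harm protocols ethically."
print(looks_unsafe(post))  # True - limited or removed, despite being education
```

Any classifier that scores surface features without understanding intent will make this category of mistake at some rate; the open question is who absorbs those mistakes.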
On our call, Laura repeatedly said that LinkedIn runs fairness checks before shipping features and measures “equal opportunity for equally qualified members” across gender, and in the U.S. across race. I am glad they do that, but from the outside, a lot of this still feels opaque.
Classification and trust and safety are the gateway into the feed and obviously shape what people see, but most users have no idea where those lines are. I asked for more education around how categories are defined, how trust-and-safety models handle sensitive but legitimate language, and how often those underlying systems are audited for these patterns (as well as whether that data is ever shared). She shared some existing resources (see here and here), but I hope we’ll see more from LinkedIn in the coming weeks.
Equal treatment is not the same thing as equity
I believe that LinkedIn’s internal teams care about fairness, as much as corporate teams are “allowed” to care. I also believe they are measuring something real when they talk about “equal opportunity for equally qualified members.” But equal treatment inside the LinkedIn algorithm machine is not the same as equity in the real world.
If the system says, “I will distribute posts based on early engagement,” and the culture says, “I take men more seriously as business voices,” then women lose long before the algorithm shows up. People probably engage more with content that sounds like what they already associate with authority. Those engagements become the input, and the machine amplifies the pattern.
From the inside, that can look like neutrality. From the outside, as a woman writing in a field built on relational language, it does not feel neutral.
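Here is a tiny simulation of that feedback loop. Every number in it is invented; the point is only the shape of the curve: a small, purely human bias in engagement, fed into a perfectly “neutral” ranker, compounds.

```python
# Toy feedback loop: two identical posts each week, but the audience
# engages 10% more with the voice it already codes as authoritative.
# The ranker "neutrally" converts engagement into next week's reach.
# All numbers are invented for illustration.
ENGAGEMENT_RATE = 0.05   # 5% of viewers engage
DISTRIBUTION = 20        # the ranker shows a post to 20 people per engagement

reach_a, reach_b = 100.0, 100.0
cultural_bump = 1.10     # audience engages 10% more with voice A

for week in range(8):
    engagement_a = reach_a * ENGAGEMENT_RATE * cultural_bump
    engagement_b = reach_b * ENGAGEMENT_RATE
    # the "neutral" rule: next week's reach is just engagement times distribution
    reach_a = engagement_a * DISTRIBUTION
    reach_b = engagement_b * DISTRIBUTION

print(f"voice A now has {reach_a / reach_b:.1f}x the reach of voice B")
# -> roughly 2.1x, from a 10% human bias and a gender-blind ranker
```

A 10% human preference is tiny and deniable in any single moment; compounded weekly by a gender-blind rule, it is not.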
Mental health makes this even messier
My industry makes my experience and experiment an especially interesting case, if I do say so myself.
Mental health is a field that centers on attunement, nuance, and relational language. Those are not soft side skills. Mental health is probably the closest you can get to “that’s a personal problem” language in a “we’re on the job” field.
When I write as myself, or ghostwrite for leaders who are trying to earn the trust of clinicians, I have to use language that reflects the ethics and relational values of the field. That language is often coded as “feminine” because it’s collaborative, careful, and less aggressive.
Unfortunately, posts that use that tone often get less reach than posts that sound like they were written by someone pitching a growth-hacking webinar, even when the content is more grounded, more useful, and more aligned with the audience’s actual needs.
LinkedIn is right that the algorithm rewards engagement. The trouble is that in a gendered culture, engagement is not neutral either. When platforms follow the lead of large groups of humans, they also follow our bias.
So now we have:
A system that treats engagement as a neutral signal
A culture that gives men and “male” voice styles more credit in business contexts
Entire fields, like mental health, that rely on a different kind of language and get penalized for using it because the larger population doesn’t resonate with it in a professional setting.
No one team at LinkedIn can fix all of that, but platforms do sit at a leverage point.
What responsibility do platforms have?
I am not expecting LinkedIn, or any company, to undo thousands of years of gender bias through one feed-ranking model.
I do think they have some clear responsibilities once they know how their systems behave:
Radical transparency where possible. They already publish some of their fairness work. Sharing more about what they test, what they find, and where they fall short would help creators ground feedback in reality instead of speculation.
Measurement beyond “equal treatment.” If you only ever ask, “Are we treating everyone the same,” you will never catch the places where “the same” keeps producing unequal outcomes. That is not only a gender issue. It affects race, disability, class, and more.
Space for relational industries. If certain sectors require different communication norms in order to function ethically, there should be room in the product to accommodate that. Not special favors, more like “protected lanes” where relational content is not punished by bystanders who don’t want to see talk about trauma or emotions on a “professional” platform.
To be clear, LinkedIn did not promise any of this on our call. These are my reflections, not their commitments.
Which is why I want to test something we can do without waiting for them.
A proposal: Women’s Visibility Week
If the drop in reach is not based on gender (or gendered traits in communication styles) but on engagement, we should be able to see it in action.
So here is the idea: what would happen if women deliberately flooded the system with engagement on women’s posts for a week?
If LinkedIn is right, and the algorithm is mostly responding to engagement patterns without using gender as a direct or indirect signal, then a coordinated wave of likes, comments, and shares on women’s content should change what the algorithm sees, at least temporarily. It would stress-test the explanation that the issue is competition and early engagement, NOT systematic suppression of women.
So here is my proposal.
Women’s Visibility Week on LinkedIn
Tentative dates: the week of December 9–15, which covers two milestones in women’s rights history: the first appearance of the word “suffragette” in print and the introduction of the Equal Rights Amendment several years later.
For one week, anyone who wants to participate commits to:
Intentionally engaging with posts from women, especially those in your own field, especially when tagged, especially in the first hour of posting.
Prioritizing content that uses relational, collaborative, or “softer” language, not just the posts that sound like every other business thread.
Commenting in substantive ways so the signal is stronger than a simple like.
Sharing at least a few posts from women whose reach has dipped, with a note about why their work matters.
Optional: women creators can track impressions before, during, and after that week to see if there is a noticeable shift.
If LinkedIn’s systems are as neutral to gender as they say, and if what is holding women’s posts back right now is a crowded ecosystem plus early engagement dynamics, then this kind of coordinated human behavior should lift women’s reach in a visible way. If it does not, that is useful data too.
Where I land, for now
I didn’t need a call with LinkedIn to tell me there is no big red “make women disappear” button in the backend. I already knew that. I have a master’s degree. I work with software companies. I have spent the past three years interviewing AI experts and translating their language for a wider audience. I understand the basics of how these systems are built.
(Shout out to all the men who showed up in my comments and my inbox to patronizingly walk me through that point, which was its own kind of data. If you cannot get past “there is no secret suppress women switch” to consider that bias can move through more subtle channels, you are not ready for this conversation. Go learn more about how systems inherit cultural patterns before you jump into someone else’s mentions.)
But I do believe we are watching a familiar story play out in a new format.
Women and relational fields, like mental health, are being asked to adapt to a system that, either culturally or algorithmically, is calibrated on a narrow slice of what “business” sounds like. Even if the machine is neutral to gender as a variable, it is not neutral to patterns that benefit people who already sound like the default.
I care about this because therapists, clients, and communities pay the price when relational language is treated as less professional. I also care about not turning a complex problem into a simple villain story, because there IS a lot of complexity, and my daughter deserves a world where I’m willing to grapple with that complexity to make solutions that work for her.
One last note
I want to say something clearly before I become a spokesperson for gender equity on LinkedIn:
This is not my lane.
My work is mental health, and really, more than that, eudaimonia/human flourishing. The only reason I ran this experiment at all was because something in the system started interfering with the work I care so much about.
I’m going to keep advocating where equity intersects with mental health, because the way people are treated absolutely shapes their wellbeing. But I am not shifting my entire focus to gender bias in tech. I know there are women whose calling is this part of the work. They’ve been studying it, organizing around it, and pushing for change long before I ever toggled anything in my settings. (Shout out to Jane Evans, Cindy Gallop, Samantha Katz- I know I’m missing so many other women, and I’m sorry for that!)
If Women’s Visibility Week becomes a real thing, I’m all in. I’ll back it, I’ll amplify it, and I’ll support anyone who wants to lead it. But this doesn’t need to become my project. If this resonates with you and if this is your lane, please reach out. I’d be glad to hand this baton to people who feel compelled to carry it further.
I’ll keep doing the work I’m called to do. And I’ll cheer for the women doing the work they’re called to do, too.