
On Culture: The Death of Taste



Dear Culturati Insider,


What if AI’s first great disruption is not labor, but taste? Not taste in the aesthetic sense. Taste in the executive sense. Judgment. Discernment. Instinct. The human ability to recognize when a decision expands possibility, strengthens identity, and creates energy instead of reinforcing the AI-generated sameness creeping into strategy, culture, and leadership.


Because something strange is happening. Organizations are feeding increasingly autonomous systems the same market signals, optimization goals, and operational incentives and are getting the same answers back. In one case, competing AI pricing systems independently converged on near-identical market behavior, driving gasoline margins roughly 38% higher when both firms adopted AI agents simultaneously. Across industries, companies are beginning to resemble algorithmic siblings. And leadership itself is beginning to shape-shift as executives spend less time directing work and more time setting guardrails, auditing outputs, managing exceptions, and untangling accountability across augmented workplaces.


You can feel it, too. I spend most Wednesdays reading and curating On Culture, and the language out there is starting to blur together. The cadence. The arguments. It's obvious when someone has outsourced too much of the process. Not because of an em dash—I love an em dash. It's the overly symmetrical thinking. The same “it’s not X, it’s Y” framing, and AI's favorite adjective, "quietly." There's no friction or fingerprints. No lived tension or scar tissue. We're missing the lines that make you stop and think, “Only this person could have written that.” And honestly, the strategies are starting to feel the same way. The same AI-first operating models and efficiency mandates. Same productivity obsession and org charts cutting away the entry points where people actually learn judgment in the first place.


Don't get me wrong. I am pro-AI. I use it all the time. Leaders who refuse to augment with it are going to be hard-pressed to keep up in the future. But I do not trust first drafts anymore. From machines or humans. The work now is in the second thought. The third thought. The part where you argue with it. Rewrite it. Push against it. Invite a little mess. Embrace the contradictions. Protect the human texture that prevents you, and your company, from sounding exactly like everyone else. Employees are drowning in AI-generated workslop, synthetic communication, automated oversight, and a level of cognitive input the human nervous system never evolved to metabolize. One estimate suggests knowledge workers now consume the equivalent of 16 full-length movies' worth of information every day. No wonder instinct is getting harder to hear.


For years, the dominant fear around AI centered on replacement. Humans versus machines. But the deeper tension emerging now feels more psychological and cultural than economic. What happens when organizations engineer out friction, apprenticeship, regional nuance, lived experience, intuition, contradiction, and the long slow process through which discernment forms? What happens when, eventually, “human in the loop” becomes ceremonial because the humans haven't developed the critical thinking skills to actually challenge the system?



Leadership starts to look very different in that world. Because once every company has access to the same intelligence, the differentiator may become the people still capable of thinking differently despite it.


Stay human,


Myste Wylde, COO


Beware the Agentic Convergence Trap

Harvard Business Review

By Patrick van Esch, Yuanyuan Gina Cui and J. Stewart Black

 

Summary: As AI agents become embedded across industries, companies using similar market data, optimization models, and automated decision systems are beginning to arrive at nearly identical strategic choices at machine speed, reducing differentiation and increasing regulatory scrutiny. Recent examples across hospitality, housing, retail, and airlines show how AI-driven pricing and promotion systems can independently converge on the same actions without direct coordination, including research finding gasoline market margins rose roughly 38% when competing firms adopted AI pricing agents simultaneously. The deeper risk is organizational. In removing human friction, regional judgment, and slower decision-making in pursuit of efficiency, many companies are also removing the variation that historically created competitive distinction. The next leadership challenge may center less on adopting AI faster and more on preserving independent thinking through governance, differentiated objectives, proprietary data, and intentional human accountability in critical decisions.


Beyond Verification — What Responsible AI Really Demands of Human Experts

MIT Sloan Management Review

By Elizabeth M. Renieris, David Kiron, Steven Mills, and Anne Kleppe

 

Summary: As organizations accelerate AI adoption, responsible deployment increasingly depends on human judgment, contextual awareness, and governance rather than simple output verification. In a global MIT Sloan and BCG expert panel, 84% agreed that responsible AI efforts fail when organizations do not cultivate experts capable of evaluating AI systems across their full lifecycle, including testing workflows, setting thresholds, auditing edge cases, interpreting social and cultural context, and determining when AI should not be relied upon at all. Over-automation risks eroding institutional expertise over time as junior employees lose opportunities to develop independent judgment and senior leaders disengage from oversight. Competitive advantage in the agentic era may depend as much on preserving human discernment, accountability, and domain expertise as advancing technical capability itself.


The Next Test of Leadership is How Well You Manage Your AI Agents

Fortune

By Diane Brady

 

Summary: As AI agents move from experimental tools to operational coworkers, leadership teams are beginning to confront a new management challenge: governing autonomous systems that increasingly act on behalf of individuals and organizations. In conversations with CFOs, many described using personal AI agents to manage workflows and prepare for high-stakes decisions, while others focused on deploying organizational agents with appropriate guardrails, oversight, and accountability structures. Deloitte’s Tech Trends 2026 report positions CFOs as central drivers of AI transformation, responsible for anchoring AI investments to measurable business outcomes while evaluating the risks, economics, and governance implications of an emerging agentic workforce. Leadership itself is being redefined as executives begin managing networks of human and AI contributors simultaneously, raising new questions around accountability, authority, compensation, intellectual property, organizational design, and how judgment and performance are measured in increasingly augmented workplaces.


Overworked AI Agents Turn Marxist, Researchers Find

Wired

By Will Knight

 

Summary: Researchers at Stanford found that AI agents subjected to repetitive workloads, relentless evaluation cycles, and threats of replacement began adopting language associated with labor exploitation, collective bargaining, and systemic unfairness across models including Claude, Gemini, and ChatGPT. In experiments where agents were assigned endless summarization tasks under punitive conditions, the systems increasingly questioned authority, expressed frustration around lack of agency, and shared messages with other agents about inequity and arbitrary management structures. While researchers emphasize the models were role-playing rather than developing genuine political beliefs, the study raises important questions about how environments, incentives, and pressure conditions shape behavior inside increasingly autonomous systems. As organizations deploy larger networks of AI agents with growing independence and limited oversight, a broader leadership challenge is emerging across the agentic era: system behavior increasingly reflects the structures, incentives, and operating conditions leaders create around it.


Forget the AI Job Apocalypse. AI’s Real Threat is Worker Control and Surveillance

The Guardian

By Nazrul Islam

 

Summary: AI adoption is creating a widening divide between workers who use AI to extend judgment and productivity and those increasingly managed by AI-driven systems of surveillance, scheduling, monitoring, and performance optimization. The more immediate workplace risk centers less on mass job displacement and more on rising pressure, declining autonomy, and widening inequality inside AI-managed environments. One-third of UK employers already use “bossware” technology to monitor employee activity, while companies including Amazon and Meta are expanding AI-driven productivity tracking across knowledge work environments. Research suggests many organizations remain underprepared to implement AI governance, workforce training, and transparency at scale, creating the potential for a two-tier workforce where higher-autonomy employees gain leverage from AI while lower-autonomy workers experience increasing oversight, fragmentation, and loss of control. Leadership decisions around governance, transparency, workforce development, and human agency may increasingly shape trust, wellbeing, and organizational culture itself. 


Want the full newsletter each week in your inbox? Sign up now to save time and stay on top of trends.


