Researching how AI reshapes our lives at work.
An independent research initiative that explores the human side of AI at work and advances human-centered organizational change.
🎉 Measuring the Human Experience of AI in the Workplace (Preprint) is published!
📃 View the 1-page sector summary for key differences across knowledge, service, and civic roles.
We help organizations design ethical, people-first approaches to AI adoption and change management.
The future of work is shaped by people, not technology. We prioritize empathy, transparency, and shared growth while pursuing practical results.
Led by Eric P. Rhodes, Future of Work Lab studies how AI and automation affect worker experience, including stress, autonomy, surveillance, collaboration, and purpose.
Eric is a designer and researcher exploring how emerging technologies reshape the human experience at work.
For over 20 years, he worked across art, design, and technology, including strategy and innovation roles at Google and Twitter, where he was brought in to establish human-centered design practices inside technical teams and operational organizations.
That work led to deeper questions about how systems affect motivation, meaning, and wellbeing in the workplace. He holds both a bachelor's and a master's degree in Industrial Relations from Rutgers University, with a focus on leading organizational change.
How does it actually feel to work with AI? We study how artificial intelligence is reshaping the employee experience, focusing on job stress, surveillance, collaboration, autonomy, and meaning.
This study was initially inspired by gig work, where algorithmic control, reduced autonomy, and emotional exhaustion are well documented. While the main sample focuses on more traditional sectors, the measures we used, especially those capturing stress, surveillance, and purpose, were originally designed to test whether gig-like conditions were spreading elsewhere. In that sense, gig work may be the canary in the coal mine for how AI is reshaping the emotional reality of labor.
Using two short but revealing scales (Task Level Experience Scale and Reflective Work Experience Inventory), the study captures the day-to-day realities and deeper reflections of workers navigating AI's rise.
Much of the existing research on AI in the workplace emphasizes productivity outcomes. This study centers on people. By surfacing how workers feel about AI, not just what they do, the goal is to support more ethical and human-centered design and policy choices.
After completing a pilot study with 17 participants to refine the measures, we launched a broader survey across multiple platforms.
In April 2025, we collected responses from more than 300 workers across sectors. This expanded dataset is now being analyzed to uncover patterns in AI’s impact on autonomy, stress, collaboration, surveillance, and purpose.
The full dataset, codebook, and working paper are published on the "Measuring the Human Experience of AI in the Workplace" OSF project page.
Cite this work:
Rhodes, E. P. (2025). Measuring the Human Experience of AI in the Workplace. Future of Work Lab. OSF Preprint. https://osf.io/wxzby
Although gig workers weren’t included in the study sample, the emotional and structural patterns commonly associated with gig labor (like surveillance, reduced autonomy, and eroded meaning) appeared consistently across service, civic, and knowledge roles. This suggests that what began in the gig economy is now diffusing into more traditional jobs. These findings support the study’s original hypothesis and position gig work as an early indicator of AI’s broader impact on emotional life at work.
Our research unfolded in two phases: a small-scale pilot study and a large-scale survey. Below are the most compelling patterns we’ve observed so far.
“The real fear isn’t losing jobs to AI. It’s losing agency over how it’s used.”
Preliminary finding, Future of Work Lab
These initial insights formed the foundation for the expanded dataset (n = 300+), which was designed to test and deepen these observations.
These themes will be explored in detail in the forthcoming working paper, to be published alongside the dataset and code on the Open Science Framework.
These findings underscore the need for more nuanced, worker-centered approaches to AI design and implementation. Effective strategies should invite participation, preserve autonomy, and build trust.
If you're interested in learning more about the research, collaborating on related projects, or simply want to connect, Eric welcomes inquiries from researchers, designers, and curious minds alike.
This study is being conducted independently and adheres to established ethical guidelines for research with human participants. It was originally submitted for IRB review at Rutgers University but has since been withdrawn. The researcher has completed formal ethics training for research with human participants.
All participant data is de-identified and managed using open science and privacy best practices.
Want early access to results? Subscribe here.