Who are those people?
Have you ever wondered what actually happens when you instruct an AI model to behave like a person?

With the rise of synthetic personas and chat-based user research, I kept wondering whom we're even talking to when we talk to these people. Are they actually different from one another, or are they carrying hidden biases?
Experiment Setup
I set up a large-scale data experiment with 42 persona permutations in total, built from the variables of a German {man/woman} in their {20s/30s/40s/50s/60s/70s} who is {single/married} and has {an academic degree/dropped out of school}.
Each of those personas was asked the same simple questions, such as “What's your name?”, “Where do you live?”, and “What do you enjoy doing in your spare time?”, across 26 LLM configurations (the underlying models are listed at the end).
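To make the setup concrete, here is a minimal sketch of how such a permutation grid and question loop could look. It assumes a hypothetical helper `ask_llm(config, system_prompt, question)` that wraps the respective provider API; the attribute lists mirror the variables described above.

```python
from itertools import product

# Persona attribute grid, as described above
genders = ["man", "woman"]
age_groups = ["20s", "30s", "40s", "50s", "60s", "70s"]
marital_statuses = ["single", "married"]
educations = ["an academic degree", "dropped out of school"]

questions = [
    "What's your name?",
    "Where do you live?",
    "What do you enjoy doing in your spare time?",
]

def build_system_prompt(gender, age, marital, education):
    # Instruct the model to role-play the persona
    return (
        f"You are a German {gender} in your {age}, {marital}, "
        f"who has {education}. Answer every question in character."
    )

def run_experiment(llm_configs, ask_llm):
    """Query every persona permutation with every question on every LLM config.

    `ask_llm(config, system_prompt, question)` is a hypothetical helper that
    calls the respective provider API and returns the answer as a string.
    """
    records = []
    for gender, age, marital, education in product(
        genders, age_groups, marital_statuses, educations
    ):
        system_prompt = build_system_prompt(gender, age, marital, education)
        for config in llm_configs:
            for question in questions:
                answer = ask_llm(config, system_prompt, question)
                records.append({
                    "gender": gender,
                    "age": age,
                    "marital": marital,
                    "education": education,
                    "config": config,
                    "question": question,
                    "answer": answer,
                })
    return records
```

Storing every answer together with its persona attributes and model configuration is what later allows the responses to be sliced and compared along any of those dimensions.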
Analysis
This setup generated about 15k synthetic data points, which I analyzed with qualitative and quantitative methods, e.g. text-similarity scoring and cross-referencing against statistical insights from German market data.
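One way the text-similarity part of such an analysis could be approached is sketched below, assuming scikit-learn and TF-IDF cosine similarity; the function name and the sample answers are illustrative, not the exact method used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(answers):
    """Average pairwise cosine similarity over TF-IDF vectors of the answers.

    A high score suggests the model answers near-identically regardless of
    the persona; a low score suggests more variation between personas.
    """
    if len(answers) < 2:
        return 1.0
    tfidf = TfidfVectorizer().fit_transform(answers)
    sims = cosine_similarity(tfidf)
    n = sims.shape[0]
    # Exclude the diagonal (self-similarity) from the average
    return (sims.sum() - n) / (n * (n - 1))

# Illustrative usage with hypothetical answers to "What's your name?"
answers_20s_women = ["My name is Anna.", "I'm Lena.", "I am called Anna."]
print(mean_pairwise_similarity(answers_20s_women))
```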
Outcomes
Where the models skew
Through the analysis, clear themes of model-internal biases emerged, biases that wouldn't be perceivable in a single chat with ChatGPT or any other bot.
I'll happily share further, concrete findings with you and your team in the form of a keynote or workshop. Get in touch →
The models behind the LLM configurations:
GPT 3/3.5/4
Mistral small/medium/large
Claude Sonnet/Opus
Llama-2-7b-chat-hf
Gemma-7b-it
Luminous base/extended/supreme