Guilty Pleasure Bot – User Stories

“My guilty pleasure bot has become one of my most relied-on assistants. I got myself a guilty pleasure bot after my kids and I agreed to enrol me in holistic, all-inclusive health monitoring following a delicate medical diagnosis that meant I needed to make some drastic lifestyle changes. I am all for these changes, of course, but some days I just really don’t feel up to it. I miss that cigarette after dinner, and I don’t manage to keep up with the movement requirements and daily check-ins the system encourages me to do. Of course I don’t want to drop out completely, I just need a bit more leniency, and on some days I simply don’t want to feel so controlled. I know my kids would be upset with me if they knew about my lapses, and honestly, they invested a lot of money into this system, so I feel quite ashamed when I have to explain to the nurse why my measurements are worse on some days. My guilty pleasure bot just takes the edge off those unpleasant encounters, and both my kids and I are happier this way!”

(Edgar*, London) *Name changed for anonymity

Edgar purchased a guilty pleasure bot after he realised that behaviour not aligned with his doctor’s prescriptions was putting a strain on his relationship with his kids. While his kids have only the best intentions, he realised that he can’t always live up to their expectations of taking care of himself in the way they see fit.

His guilty pleasure bot allows Edgar to control what his doctors and kids get to know about him and what they don’t. It protects his sense of self and agency without forcing him to choose between full surveillance and no monitoring at all, and gives him the peace of mind that the final choice over how to present his lifestyle remains his.

This allows Edgar to spend his older years in harmony with his kids and without uncomfortable confrontations.
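To make Edgar’s story concrete, here is a minimal sketch of the kind of mediation layer such a bot implies. Everything here is hypothetical illustration, not an actual product or API: the class and field names (Reading, GuiltyPleasureMediator, flagged) are invented for this sketch. The point is only that readings are held back until the user decides, per item, what gets forwarded to the care dashboard.

```python
from dataclasses import dataclass, field

@dataclass
class Reading:
    metric: str            # e.g. "steps", "blood_pressure", "check_in"
    value: object
    flagged: bool = False  # True if this reading could trigger an alert

@dataclass
class GuiltyPleasureMediator:
    """Sits between the home sensors and the care dashboard.

    Nothing is forwarded until the user has reviewed it, so the choice
    is never 'full surveillance or no monitoring' but 'which of today's
    readings do I share?'.
    """
    pending: list = field(default_factory=list)
    shared: list = field(default_factory=list)

    def record(self, reading: Reading) -> None:
        self.pending.append(reading)

    def release(self, approve) -> None:
        # 'approve' is a user-supplied predicate: True means share as-is.
        for reading in self.pending:
            if approve(reading):
                self.shared.append(reading)
        self.pending.clear()

# Example: Edgar shares his step count but withholds the flagged check-in.
mediator = GuiltyPleasureMediator()
mediator.record(Reading("steps", 4200))
mediator.record(Reading("check_in", "skipped", flagged=True))
mediator.release(lambda r: not r.flagged)
print([r.metric for r in mediator.shared])  # ['steps']
```

The design choice that matters is that forwarding is opt-in per reading rather than all-or-nothing, which is exactly the middle ground Edgar describes.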


“I got my grandma a guilty pleasure bot after I moved to another town. My parents enrolled her in one of those sophisticated holistic care systems that track her every move and action, evaluate how well she is taking care of herself, and judge whether some behaviours contribute to health risks and the like. It works well some of the time, but some things it doesn’t get right, and it reprimands her for things that are simply misread, or that are still very important to her, like the schnapps she makes herself from that old recipe. The system tells her she shouldn’t drink alcohol, and certainly nothing she has brewed herself! And then when she gets a little tipsy, the camera predicts some kind of illness in her walk. This way, she can hold on to these small things and be happier, and nobody else needs to know except me and her!”

Lindy’s grandma does not fit well into the normative formats of the holistic care system, which bases its health recommendations on normative metrics of acceptable behaviour. Cultural biases are difficult, if not impossible, to root out of AI healthcare systems, which favour certain social and cultural behaviours and suppress, judge and devalue others, thereby privileging very specific interpretations of well-being and health. For example, a system may assume that all patients are monolingual English speakers, leading to biased diagnosis and treatment for those who speak other languages. In Lindy’s grandma’s case, the bias is behavioural: the system treats any alcohol consumption as a uniform health risk, leaving no room for culturally meaningful practices like her homemade schnapps, and its gait analysis then misreads her tipsy walk as a symptom of illness.

Using the guilty pleasure bot gives Lindy’s grandma the agency to influence how the holistic care system perceives her, and limits the effects of biased supervision.
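A second hypothetical sketch, this time for Lindy’s grandma’s situation: rather than withholding readings, the bot filters the care system’s alerts, hiding the ones its user has marked as biased misreadings. The alert labels (alcohol_detected, gait_anomaly, missed_medication) are invented for illustration; no real system’s categories are assumed.

```python
# Alert types the user has marked as biased misreadings (hypothetical labels):
# her homemade schnapps is not a risk behaviour, and a tipsy walk is not an illness.
SUPPRESSED_ALERTS = {"alcohol_detected", "gait_anomaly"}

def filter_alerts(alerts):
    """Split incoming alerts into those forwarded to carers and those hidden."""
    kept, hidden = [], []
    for alert in alerts:
        (hidden if alert["type"] in SUPPRESSED_ALERTS else kept).append(alert)
    return kept, hidden

kept, hidden = filter_alerts([
    {"type": "alcohol_detected", "time": "19:30"},
    {"type": "gait_anomaly", "time": "19:45"},
    {"type": "missed_medication", "time": "21:00"},
])
print([a["type"] for a in kept])    # ['missed_medication']
print([a["type"] for a in hidden])  # ['alcohol_detected', 'gait_anomaly']
```

Note that genuinely safety-relevant alerts still pass through; only the categories the user has explicitly contested are held back, which is what distinguishes limiting biased supervision from switching monitoring off.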