
AI and Social Conduct

AI can be a useful tool, but it can also affect how you feel and how you treat other people. AI is not objective; it reflects the biases in its training data, its design, and the assumptions built into it.

This resource is designed to help you reflect on your use of AI and support you to protect yourself and others. It is focussed on social conduct, but you can read more about AI and academic integrity here.


Red flags

Here are 10 "red flags" that your AI use might be having a negative impact on your wellbeing or social conduct, and that it's time to pause and reflect, or seek support:

  1. You feel more anxious or paranoid after using it, perhaps feeling watched or targeted.
  2. You're using it to validate fears or suspicions about individuals or groups/communities of people.
  3. You feel compelled to keep checking it, have difficulty stopping, or find yourself getting increasingly irritated when you can't access it.
  4. You're substituting conversations with AI for conversations you would otherwise be having with people.
  5. Your trust in other people is shrinking, and you find yourself becoming suspicious about others or assuming they have bad intentions.
  6. You are using AI to rehearse or justify hostility towards others.
  7. You're trusting AI to make difficult decisions for you, such as interpersonal decisions about relationships or work disputes, significant financial decisions, or health choices.
  8. It is becoming your primary source of reassurance, or sense of meaning/purpose.
  9. You find yourself relying on it to manage your personal interactions, such as writing an apology, and becoming more uncomfortable handling those interactions without it.
  10. You feel worse about yourself after using it.

Practical guardrails

Here are 10 "practical guardrails" you could use to help keep AI use healthy and socially responsible:

  1. Set a clear purpose before you prompt, and stop when you've achieved it.
  2. Avoid run-on sessions, setting a timer if you find yourself losing track of time while using it.
  3. Keep AI out of high-stakes decisions, such as medical, legal, financial or relationship issues.
  4. Fact-check any important claims AI makes, e.g. set a rule that the model should provide a reputable source for factual claims, which you then check yourself.
  5. Imagine that your sessions could be viewed by a close friend, and if you'd feel uncomfortable with that, reconsider.
  6. Prioritise your own voice and personality, both in your interactions with AI and when amending any draft wording it provides.
  7. Diversify your sources of support (family, friends, colleagues, your union, professional wellbeing services etc.) and ensure you're not relying solely on AI for advice and support.

  8. Check in with yourself regularly to ask if your use of AI is improving your wellbeing and social conduct, or making it worse.
  9. Ask AI for alternative perspectives ("What other explanations might there be?") as models can focus on reinforcing your pre-existing views/concerns.
  10. Have an exit plan, deciding in advance what you'll do if your AI use starts feeling unhealthy e.g. log off and go for a walk, or talk to a friend.

With applications of AI expanding rapidly, we are engaging with AI in many areas of everyday life where it may not be immediately obvious - social media feeds, search engines, online ads, healthcare tools, and educational platforms to name a few.

Examples of AI-related social misconduct

AI tools have made it much easier for people to create or amend realistic images, audio, or video of other people. What often begins as curiosity, experimentation, or humour can have harmful real-world consequences when AI is used to impersonate others (or simply to obscure the user's own identity), generate ‘nudified’ or sexualised images of others, or create realistic deepfakes that depict people saying or doing things they never did. Such behaviour can result in humiliation, reputational harm, and loss of trust in a person or organisation, even if the creator didn’t intend any harm. In research commissioned by the Office of the Police Chief Scientific Advisor, three in five people said they were worried about being a victim of a deepfake. People's boundaries differ, and whilst one person might find a deepfake innocuous, others are deeply disturbed by any use of their image or likeness without their consent.

AI tools can also add scale, speed, and realism to interactions in ways that change their impact. This includes the ability to generate a large volume of comments or messages to fuel ‘flooding’ and ‘pile-ons' online, create and run dating profiles (“chatfishing”), or rapidly produce persuasive text to influence or overwhelm others. These behaviours can blur the line between individual expression and coordinated pressure, particularly when AI is used to simulate consensus or emotional engagement. A 2025 UN Women-commissioned global survey of women in the public sphere found that nearly a quarter (23.8%) of respondents had already experienced AI-assisted online violence, illustrating how quickly these scaled behaviours translate into real-world harm.

Users may also share confidential or sensitive information about other people with AI tools without their knowledge or consent, which can create privacy and security risks. Harmonic Security’s Q3 2025 analysis of three million prompts submitted to GenAI tools found that 26.38% of files uploaded contained sensitive information. AI-generated content often includes significant errors or ‘hallucinations’, which, when shared onwards without fact-checking, effectively borrow the user’s own credibility to spread misinformation. Related behaviours include passing off AI output as the user’s own in study or work, and using AI to create content that intentionally misinforms others (for example, persuasive posts or “evidence” that is fabricated or selectively framed to affect public opinion on political and social issues), where the harm is amplified by the speed and ease with which AI can produce convincing material.

Opportunities to learn more & get involved

AI Ethics Now podcast

"AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding artificial intelligence from a non-specialist perspective, including bias, ethics, privacy, and accountability. Join us as we discuss the challenges and opportunities of AI and work towards a future where technology benefits society as a whole.

This podcast was first developed by Dr Tom Ritchie and Dr Jennie Mills as part of The AI Revolution: Ethics, Technology, and Society module, taught as part of IATL at the University of Warwick."

AI Centre of Excellence

"The AI Centre of Excellence will accelerate the development of standards and processes for AI, and the sharing of best practice. Our vision is to integrate artificial intelligence seamlessly into teaching, research, and administration. We want to empower our community with the latest AI technologies to prepare for the future's challenges and opportunities."

AI & Academic Integrity

Understanding and demonstrating academic integrity is essential to your success as a student. It’s a core value and an expectation for all students at Warwick. 

The pages linked here were created to support you by offering clear guidance, useful resources, and links to further help if you have any questions or need advice.
