Using AI Responsibly
For the last year or two, especially in education and the creative arts, debate has raged about the use of large language models in particular to assist in generating content. We're going to focus on the education side, as the creative arts debate leans more into ethics, which is outside the scope of this piece. In education, the main concern until very recently has been that students will use tools like ChatGPT to cheat, for example by getting the system to write their homework.
While this remains a concern, it's important to note that many students are starting to shift their use of the technology toward things like helping with research, getting ideas for rewriting their work to better align with grading rubrics, and using it as a general sounding board. While this can still be problematic, especially if students aren't checking the generated content to make sure it's correct, it shows that tools like this can serve a useful purpose.
So how does this relate to media literacy as a whole?
Large language models and AI-generated imagery, videos, and content in general have been sweeping social media, making it even more difficult to ascertain whether something is true, or whether it was created by a human or a machine. This has led to a dramatic spike in mistrust of posted information. Some skepticism is healthy, of course; you can't and shouldn't trust everything you see out there, but for many people it has crossed the line into echo chambers and outright absurdity, where people disbelieve anything, big or small, that doesn't fit their worldview. For creatives, it has even led to a spike in accusations that AI was used simply because the accuser doesn't like the piece.
All of this means that media literacy has grown in importance; not only do you have to discern whether something is useful, but also whether it's true at all.
So how can you sort out the content generated by machines versus humans and then sort out whether it’s true or not? Well, a lot of the same tools we looked at for media literacy work here too:
- Verify the information independent of the source you found it in
- Ask yourself what the information is trying to make you feel or do
- Don’t immediately assume something is (or isn’t) AI generated just because it has a certain style or tone. Remember that AI is trained on human content, meaning it will copy the styles and tones of those people. Also keep in mind that so-called AI-detection software is extremely unreliable.
Most of all, it’s important not to let yourself become hysterical, angry, stressed out, or upset over content that is created specifically to make you react. Dissecting this content with a clear head helps to untangle the emotions and makes you better at determining its source.
AI content is likely here to stay, and the tools are going to become more integrated into everything we do. It’s important to keep studying these tools and to use media literacy and critical thinking skills to parse what they produce.