The development, use, and management of AI tools raise major legal, ethical, philosophical, and environmental concerns, including issues surrounding intellectual property, energy use, labor, privacy, bias, human agency, and potential misuse.
The AAC&U Student Guide to AI summarizes some of these concerns on the "AI Ethics" page of its guide. A key paper that outlined many of the problematic issues of large language models before the public launch of ChatGPT in November 2022 is "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (March 2021).
Being AI literate means having an understanding of both the risks and rewards that these tools bring to society. Below are brief descriptions of some of these concerns, with links out to more information.
Accuracy
Generative AI models can produce inaccurate outputs, often due to limitations in their training data and the probabilistic nature of the generation process. A key problem is "hallucinations," which occur when AI produces false or misleading information that is presented as fact. Because the "voice" of the AI system sounds confident and authoritative, such false information is often difficult to detect unless the user is an expert. Outputs can be plausible-sounding but completely fabricated, making it challenging to discern truth from fiction without external verification. A common example of a hallucination is a made-up source: perhaps a real journal title and a real author name paired with a fabricated article title and page range. It's important to remember that text generators like ChatGPT, Gemini, and Copilot are designed to generate grammatically correct text, not accurate information or sources. All information must be verified against authoritative sources.
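For citations specifically, one practical verification step is to check whether a cited DOI actually resolves to a real record. The short sketch below is a minimal example only, assuming Python with the requests package and network access; it queries the public Crossref REST API, and the DOI shown is a hypothetical placeholder rather than a real reference.

```python
# Minimal sketch: check whether a DOI from an AI-generated citation
# resolves to a real record in the Crossref database.
# Assumes the `requests` package is installed and network access is available.
import requests

def check_doi(doi: str) -> None:
    """Look up a DOI via the public Crossref REST API and print what it resolves to."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        work = resp.json()["message"]
        title = work.get("title", ["(no title)"])[0]
        print(f"Found: {title}")
    else:
        print("No Crossref record found -- verify the citation by hand.")

# Example with a hypothetical, made-up DOI:
check_doi("10.1234/example-doi")
```

A "not found" result does not prove fabrication on its own (not every real publication has a DOI registered with Crossref), but it is a useful prompt to track down and confirm the source yourself.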
Transparency
Issues of transparency (or lack of transparency) arise across multiple AI topics. For companies creating AI systems, being transparent about training data and processes helps to build trust and expose potential biases. For users of AI systems, being clear about AI use reinforces accountability and human agency and promotes responsible engagement with information. Most scholarly journals, for example, have clear policies regarding whether, how, and in what situations AI use is permitted, and provide guidance on how to acknowledge that use. If it is appropriate to use AI for a project, you might consider different ways of being transparent about your AI use: notating the text to show different types of AI use, providing a short paragraph (perhaps in a Methods or Acknowledgments section) that explains your AI use, or using formal citation methods if appropriate.
Bias
The data that AI is trained on can often reflect existing societal inequities and historical prejudices. This means that AI outputs might be stereotypical or discriminatory, thereby perpetuating or amplifying biases that exist in the real world. These biases can have critical impacts on many potential uses of AI, including in areas such as healthcare and criminal justice. The "black box" nature (that is, lack of transparency) of many generative AI models can make these biases difficult to pinpoint or correct. It's always important to be aware of the risk of bias when using generative AI.
Privacy
Depending on the AI system and which version you are using, you may not have knowledge or control over how the information you provide to the system could be accessed and used. Here are some tips to protect the privacy of your own data and that of others:
Intellectual Property
AI models are often trained on copyrighted works without the permission of the original creators, posing a number of challenges to current copyright law. When possible, use AI tools that are transparent about their training data, and educate yourself on the range of legal risks and concerns. Here are some resources to help:
Environmental Impact
Many AI systems run on GPUs (Graphics Processing Units), which tend to consume far more energy than CPUs (Central Processing Units), leading to a larger carbon footprint and contributing to climate change. In addition, large data centers can consume significant amounts of water for cooling, putting a strain on local water supplies. The ecological footprint of AI presents a growing sustainability problem that needs urgent attention, and it is important to avoid thinking of AI systems as limitless resources that can be used for trivial purposes.
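To make the scale of the concern a little more concrete, the back-of-envelope sketch below multiplies an assumed GPU power draw by runtime to estimate energy use and the associated emissions. Every number in it is an illustrative assumption, not a measurement of any particular model, GPU, or data center.

```python
# Back-of-envelope estimate: energy and CO2 for a GPU workload.
# All numbers below are illustrative assumptions, not measured values.
gpu_power_watts = 400          # assumed average draw of one data-center GPU
num_gpus = 8                   # assumed size of the job
hours = 24                     # assumed runtime
grid_kg_co2_per_kwh = 0.4      # assumed grid carbon intensity

energy_kwh = gpu_power_watts * num_gpus * hours / 1000
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Energy: {energy_kwh:.1f} kWh, CO2: {emissions_kg:.1f} kg")
# -> Energy: 76.8 kWh, CO2: 30.7 kg under these assumptions
```

Even a single day-long job under these assumed figures uses tens of kilowatt-hours, which is why scaling such workloads to millions of users raises sustainability questions.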
Labor Impacts & Human Agency
Generative AI intersects with labor issues in many different ways. For one, generative AI systems rely on "invisible labor," often performed by low-wage workers in the Global South who are tasked with labeling and moderating training data. In addition, many are concerned about the potential for AI to displace jobs, while others point to new job categories emerging around AI development, maintenance, and oversight. Many believe that generative AI will reshape the nature of work and require a reevaluation of labor protections, skill development, and the equitable distribution of benefits and burdens.
What Is AI Detection?
AI detection tools (or AI detectors) claim to determine whether a piece of content (e.g., text, image, code) was generated by AI. In educational settings, they are often used to flag potential AI-generated or AI-assisted writing and are sometimes viewed as a way to preserve academic integrity by distinguishing between human-authored and AI-generated content.
However, no AI detection tool is 100% accurate. All of them are susceptible to false positives (i.e., flagging human-authored content as AI-generated) and false negatives (i.e., failing to detect actual AI-generated content). While some tools claim to be 99% accurate, research has shown that such claims are often exaggerated. A few tools may exceed 90% accuracy under certain conditions, but others perform no better than chance, which is essentially as reliable as flipping a coin.
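Even a detector with seemingly strong accuracy figures can produce a surprising share of false accusations when only a small fraction of submissions are actually AI-generated. The sketch below works through the base-rate arithmetic; the true-positive rate, false-positive rate, and prevalence are all assumptions chosen for the example, not properties of any real tool.

```python
# Illustrative base-rate calculation: how often is a "flagged" essay really AI-generated?
# All rates below are assumptions for the example, not measurements of any detector.
true_positive_rate = 0.99    # assumed: detector catches 99% of AI-generated texts
false_positive_rate = 0.01   # assumed: detector flags 1% of human-written texts
prevalence = 0.05            # assumed: 5% of submissions are actually AI-generated

p_flag = true_positive_rate * prevalence + false_positive_rate * (1 - prevalence)
p_ai_given_flag = (true_positive_rate * prevalence) / p_flag

print(f"Probability a flagged text is actually AI-generated: {p_ai_given_flag:.0%}")
# -> about 84% under these assumptions; roughly one in six flagged texts is human-written
```

In this illustrative scenario, roughly one in six flagged submissions would be human-written, which is one reason a single detector score should never be treated as proof.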
In addition to inconsistency in accuracy, there are several other concerns when relying on AI detection tools, including:
Ultimately, while AI detection tools can offer helpful signals, they should not be used as the final arbiter of whether content was AI-generated. Context, transparency, and human judgment remain essential in interpreting and responding to their results.
How AI Detection Works
Most AI detectors operate using proprietary, black-box algorithms, meaning the internal logic or criteria used to make judgments about a piece of content are not publicly disclosed. This lack of transparency makes it difficult for users to fully understand how decisions are made or to independently verify the accuracy and fairness of the results.
At a high level, AI detection tools are based on statistical stylometry (i.e., the analysis of writing style through quantifiable features). These tools scan the input for statistical patterns that may differ between human-authored and AI-generated content. Commonly cited text features include perplexity (how predictable the wording is to a language model) and burstiness (how much sentence length and structure vary), along with other measures of vocabulary and syntax; a rough illustration follows below.
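As a simplified picture of what "statistical stylometry" can look like in practice, the sketch below computes a few generic text statistics: average sentence length, variation in sentence length, and vocabulary richness. These particular features are illustrative assumptions only; actual detectors rely on proprietary features and trained models, not these simple measures.

```python
# Toy stylometric feature extraction -- illustrative only.
# Real detectors use proprietary features and models; these statistics are
# generic examples of the kinds of quantities a stylometric approach might measure.
import re
import statistics

def stylometric_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "avg_sentence_length": statistics.mean(sentence_lengths),
        # Low variation in sentence length is sometimes described as more
        # machine-like, though this is not a reliable rule.
        "sentence_length_stdev": statistics.pstdev(sentence_lengths),
        "vocabulary_richness": len(set(words)) / len(words),  # type-token ratio
    }

print(stylometric_features("The cat sat. The cat sat again. The cat sat once more."))
```

Note that simple statistics like these can be shifted by editing, translation, or writing style, which is part of why different tools built on different features and thresholds can disagree about the same text.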
Nevertheless, it’s important to emphasize that there is no standardized method or agreed-upon benchmark for AI detection. Because each tool uses its own detection criteria and thresholds, it is not uncommon for the same content to receive conflicting results from different detectors. This variability raises questions about reliability and consistency, especially when these tools are used in high-stakes environments like education or publishing.
Limitations & Challenges
A wide body of literature has documented limitations and challenges in AI detection, for example:
1. Model Drift & Arms Race
"... this benchmark understanding is missing in the literature, and hence it is difficult to build a universal classifier that can detect AI-generated text across various domains."
Agarwal, A., & Uzair, M. (2025). Robustness of Classifiers for AI-Generated Text Detectors for Copyright and Privacy Protected Society. In International Conference on Pattern Recognition (pp. 55-71). Springer, Cham. https://doi.org/10.1007/978-3-031-78498-9_5
"Detecting AI-generated images is increasingly challenging owing to the continual development of more image generative models that produce better and higher-quality images."
Park, D., Na, H., & Choi, D. (2024). Performance comparison and visualization of AI-generated-image detection methods. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3394250
2. Bias Against Non-Native English
"GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness."
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779
"In other words, one in four non-native authors who use AI to help refine their text is at risk of being accused of having submitted an entirely AI-generated text by GPTZero, while the risk for native authors is closer to one in ten."
Pratama, A. R. (2025). The accuracy-bias trade-offs in AI text detection tools and their impact on fairness in scholarly publication. PeerJ Computer Science, 11, e2953. https://doi.org/10.7717/peerj-cs.2953
3. Privacy and Data Protection
"While we retain a copy of submitted text, we do not reproduce the text or disclose it to third parties. This means while a copy of your submission is stored, it is never shown to a third party and you retain ownership of the submission."
GPTZero (2025). GPTZero FAQs: Do I retain ownership of the work after passing it through GPTZero?
Except where otherwise noted, this work by SBU Libraries is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.