Editorial
Since we last published (in the middle of December), generative AI has continued to make the headlines, particularly with Google’s attempts to satisfy demands for “diversity” by generating images of black, female Nazi soldiers. Needless to say, embarrassment all round. The feeling is, I suppose, that if the system can be trained to get things so wrong with images, what might be going on with text? Certainly biases must exist there, given the amount of training material used in these systems and the fact that, as all of it is on the Web, all of the conspiracy material out there probably contributes. Certainly, the systems get information about people wrong, simply because so many people share the same names. There are, for example, lots of people with the name Tom Wilson, or even T.D. Wilson, and when I asked for a biography of myself, one of the systems told me I was dead! Another said that I had worked in places I’d never even visited. No doubt training will become more and more sophisticated over time, as the flaws and biases are discovered, but given AI’s ability to invent on the basis of what it has learnt, I’ll be very cautious in using it for the foreseeable future.
License
Copyright (c) 2024 Thomas D. Wilson
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
https://creativecommons.org/licenses/by-nc-nd/4.0/