Questions are being raised about generative artificial intelligence

What is AI

Artificial intelligence uses machine learning and algorithms to analyse data and make decisions based on that data. More specifically, it is about recognising and identifying patterns in the data presented to an algorithm, based on what that algorithm has previously been taught.
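
As a minimal sketch of what “taught” means here, the following Python example trains a classifier on a handful of labelled samples and asks it to label a new one. It assumes the scikit-learn library is installed, and the tiny data set is invented purely for illustration.

```python
# A minimal sketch of "learning from examples" with scikit-learn.
# The data set is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each sample is [hours of daylight, temperature in degrees C];
# labels mark whether a photo was taken outdoors (1) or indoors (0).
samples = [[14, 25], [13, 22], [2, 18], [1, 19], [12, 28], [3, 20]]
labels = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(samples, labels)

# The model now labels new data from the pattern it was "taught".
print(model.predict([[11, 24]]))  # likely outdoors (1)
```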

This is primarily used for speech-to-text, machine translation, content recommendation engines and similar use cases. It is also being used to recognise objects in a range of fields, such as medicine, photography, content management, defence and security.

You may find that your phone’s camera uses this to improve photo quality, or that Google Photos uses it for facial recognition as part of indexing your photos. Netflix and other online video services use it to build a “recommended viewing” list based on what you have previously watched. Likewise, the likes of Amazon Alexa, Apple Siri and Google Assistant use this technology to understand what you say and hold a conversation.

What is generative AI

Generative artificial intelligence applies artificial intelligence, including machine learning, to creating content. Here, one or more algorithms are trained, typically on different data collections, and then used to create that content. It is best described as programmatically synthesising material from other source material.
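
As a minimal sketch of this synthesis, the following example uses the Hugging Face transformers library and the small public GPT-2 checkpoint to continue a prompt. The prompt text is arbitrary, and the output will vary from run to run.

```python
# A minimal sketch of generative AI: continuing a text prompt with
# material synthesised from patterns in the model's training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Generative AI can be described as", max_new_tokens=40)
print(result[0]["generated_text"])
```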

This is underscored by ChatGPT and similar chatbots, which use conversational responses to create textual, audio or visual material, and which are seen as the killer app for generative AI. But a “voice typeface” or “voice font” that represents a particular person’s voice in text-to-speech applications could be a similar application.

Sometimes generative AI is used as a means to present statistical information in an easy-to-understand form. For example, it could generate an image collection of particular cities shaped by geographically relevant data.

The issues that are being raised

Plagiarism

Here, one could use a chatbot to create what appears to be new, original work from material in other sources, without attributing the creators of that source material.

Nor does it require end-users to make a critical judgement call about the sources or the content created, or allow them to apply their own personality to the content.

This affects academia, journalism, research, the creative industries and other fields. For example, educational institutions see it as affecting how students are assessed, such as whether the classic written approach should remain the preferred one or be interleaved with interview-style oral assessment methods.

Provenance and attribution

This issue extends to identifying whether a piece of work was created by a human or by generative artificial intelligence, and to identifying and attributing the original content used in the work. It also encompasses the privacy of individuals who appear in works like photos or videos, and whether personal material from one’s own image collection is being properly used.

In practice, this would mean, for example, having us “watermark” content we create in, or export to, the digital domain, and having to declare how much AI was used in the process of creating it.
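
As a minimal sketch of how such a disclosure could travel with a file, the following example embeds a text chunk in a PNG image’s metadata using the Pillow library. The file names and disclosure wording are hypothetical, and dedicated provenance schemes such as C2PA go much further than this.

```python
# A minimal sketch of one way a disclosure "watermark" could travel
# with an image: a text chunk in the PNG metadata. File names and
# disclosure wording are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("artwork.png")

meta = PngInfo()
meta.add_text("ai-disclosure", "Background generated with a text-to-image model")
meta.add_text("creator", "Jane Example")

# The saved copy carries the disclosure alongside the pixel data.
img.save("artwork-tagged.png", pnginfo=meta)
```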

Creation of convincing disinformation content

We are becoming more aware of disinformation and its effect on social, political and economic stability. It is something we have become sensitised to since 2016 with the Brexit referendum and Donald Trump’s election victory in the USA.

Here, generative artificial intelligence could be used to create “deepfake” image, audio and video content. One example is a recent fake image of an explosion at the Pentagon, which was sent around the Social Web and rattled Wall Street.

These algorithms could be used to create the vocal equivalent of a typeface from audio recordings of a particular speaker. That vocal “typeface” could then be used with text-to-speech to make it sound as though the speaker said something in particular, for instance to suggest that a politician had contradicted themselves on a sensitive issue or given authority for something critical to occur.

Similarly, a combination of images or videos could be used to create another image or video that depicts an event that never happened. This can involve mixing stock imagery or B-roll video with other material.

Displacement of jobs in knowledge and creative industries

Another key issue regarding generative artificial intelligence is which jobs the technology will affect.

There is a strong risk that a significant number of jobs in the knowledge and creative industries could be lost to generative AI, because the algorithms could be used to turn out material rather than having people create the necessary work.

But some creative fields will want to preserve the human touch when it comes to creating a work, not least because such human-made work often serves as training material for artificial-intelligence and machine-learning algorithms.

It may also be found that some processes involved in creating a work could be expedited with this technology while still leaving room for the human touch. This often applies to editing processes, such as cleaning up and balancing audio tracks or adjusting colour, brightness or contrast in image and video material, with the technology working as an “assistant”. It can also mean accurately translating content between languages, whether as part of content discovery or as part of localisation; a sketch of this follows.
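
As a minimal sketch of machine learning assisting rather than replacing an editor, the following example translates a sentence from English to German using the Hugging Face transformers library and a real public translation model. The sentence itself is invented for illustration.

```python
# A minimal sketch of machine translation as an editing "assistant".
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

# A human editor would still review the output before publication.
result = translator("The human editor still makes the final decision.")
print(result[0]["translation_text"])
```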

People in the knowledge and creative industries may also be able to differentiate so-called “cookie-cutter” output from artistic output created by humans, including identifying the artistic calibre that went into a work.

The push to slow down and regulate AI

There is a push, even within established “Big Tech” circles, to slow down and regulate artificial intelligence, especially generative AI.

This encompasses slowing the pace of AI technology development, especially generative AI development, to allow the possible impact of AI on society to be critically assessed and, perhaps, to install “guardrails” around its implementation.

It also encompasses an “arms race” between generative-AI algorithms and the algorithms that detect or identify the use of generative AI in a work’s creation. This includes identifying source material and establishing what role generative AI had in creating the work, since the technology may play a genuinely beneficial role, such as expediting routine tasks.

There is also an emphasis on what kind of source material generative-AI algorithms are fed to generate particular content. This recalls the GIGO (garbage in, garbage out) concept long associated with computer programming: you can’t make a silk purse out of a sow’s ear.

What can be done

More effort has to go towards improving the social trustworthiness of generative AI when it comes to content creation. That could mean establishing where generative AI is appropriate in the creative workflow and where it is not, and making it feasible for us to know whether content was created by artificial intelligence and to attribute any source content used.

Similarly, there could be a strong code of ethics for handling AI-generated content, especially where it is used in journalism or academia. This applies most where a significant part of the workload involved in creating the work is contributed by generative AI, rather than generative AI being used only in the editing or finishing process.
