Not everything we call AI is actually ‘artificial intelligence’.
Late last month, AI, in the form of ChatGPT, broke free from sci-fi speculation and research labs onto the desktops and phones of the general public. It’s an example of what’s known as “generative AI” – suddenly, a cleverly worded prompt can produce an essay, put together a recipe and shopping list, or create a poem in the style of Elvis Presley.
While ChatGPT has been the most dramatic entrant in a year of generative AI success, similar systems have shown even wider potential to create new content, with text-to-image prompts used to create vibrant images that have even won art competitions.
AI may not yet have the living consciousness or theory of mind popular in sci-fi movies and novels, but it is at least getting closer to disrupting what we think artificial intelligence systems can do.
Researchers working closely with these systems have even entertained the prospect of sentience, as in the case of Google’s large language model (LLM) LaMDA. An LLM is a model that has been trained to process and generate natural language.
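The core task of an LLM – predicting what comes next from the text so far – can be illustrated at toy scale. The sketch below uses a simple bigram word model rather than a neural network (real systems like LaMDA are vastly larger and more sophisticated), but the underlying idea of generating language by repeatedly sampling a plausible next token is the same:

```python
import random
from collections import defaultdict

# Toy illustration only: real LLMs use deep neural networks trained on
# enormous corpora, but the basic task is the same -- predict the next
# word given the words so far.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which word follows which (a bigram model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a word seen after the last one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break  # no known continuation
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scaling this idea up – from counting word pairs to learning statistical patterns across billions of documents – is, loosely speaking, what gives modern generative AI its fluency.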
Generative AI has also produced worries about plagiarism, the exploitation of original content used to train models, the ethics of information manipulation and abuse of trust, and even “the end of programming”.
At the centre of all this is a question that has been growing in urgency since the 1956 Dartmouth summer workshop: does AI differ from human intelligence?