The Infinite Art Scenario - by David OReilly - Reminders



The Infinite Art Scenario


Midjourney, prompt: "Fractal Drawings by a Child"

Consider how the following scenario might play out: you train an AI with your own particular likes and dislikes, and every consideration you could put into your work. Eventually it takes over and makes everything for you so you can slack off.

It can enable you to work in all forms, creating films, books, albums, designing fashion, sculpture, architecture - each can have your personality and style. It can then go further and tweak your work for every member of your audience depending on their preferences.

Then it keeps going after you die, simulating new inspiration, and imagining how you'd incorporate changes in the world from beyond the grave.

Of course, you’re not actually needed in this equation at all. The AI can generate its own compelling artists and invent hundreds of new genres every second, differentiating each appropriately, and providing everyone with an endless, ever-updating array of original media, all tailored to their interests and aesthetic tastes.

In the Infinite Art Scenario, there’s no need for artists, because everyone is completely satisfied by what’s generated for them.

This feels within the realm of possibility.

I can think of at least three reasons it will not play out. I may be proven wrong on all of them, but I'll share them anyway.

  1. Large companies will continue to control these technologies, and all are extremely risk averse. They will limit expression within safe bounds, and produce volumes of bland, inoffensive content. Over time, we’ll find it all boring and crave whatever is being censored.
  2. Whatever we perceive as being produced automatically, we don’t value as much as what we know was difficult. Seeing a self-playing-piano and an accomplished pianist are vastly different experiences, though the music may be identical.
  3. AI may become associated with manipulation and control, and provoke repulsion and disgust from humanity. We may relegate it to specific use cases with human-readable inputs and outputs.
Let's delve a little deeper into each of these.

Corporate AI Censorship

The problem with creative AI is analogous to the dilemma many creators now face: how to be expressive in public without getting canceled.

Dall-E censors a vast number of prompts without telling the user which word or phrase is off limits. Google has shown off its Dall-E competitor, but is holding it back on the grounds that it's too offensive.

"there is a risk that [it] has encoded harmful stereotypes and representations, which guides our decision to not release [it] for public use without further safeguards in place."

Dall-E’s solution to this problem was to secretly add "woman" and "black" to prompts.

This reveals a few things. When an AI's outputs are problematic, the company will be held responsible, not whoever entered the prompts. Imagine a pencil manufacturer being sued for a racist drawing or offensive words written with its pencils. This is why this species of AI does not fit neatly into the idea of a tool: its entire value is built on massive amounts of indiscriminately harvested creative material, which nobody signed up for.

Uncensored versions of these technologies will become available, but by then the corporate ones will have moved further on - into motion, storytelling and interactivity. It’s possible that the cutting edge of AI will always be creatively castrated.

The idea that censorship leads audiences to other domains is already evident. Streaming services dump tons of algorithmically aligned content online, but it seems hard to find anything worth watching. It all seems so fake. We are more likely to listen to an individual voice to tell us the truth than we are a business or hyperintelligent being.

Context & meaning generation

Our knowledge of AI's capability alters our interpretation of its output. If a work is perceived as being easy, or carried out by the will of the form (or intelligent tools), we tend to write it off.

Consider how we tend to ignore most of the natural world and all its intricate forms, just as we do the infinite perfection of fractals or the billions of images of our planet on Google Earth, each beautiful in its own way. We seek the edges of human potential, and find meaning in what people spent time on; everything else is noise.

Of course, anyone can claim their AI-generated work was created by hand. This will likely be a good grift for a while, but as people catch on we'll get bored and seek new terrains of complexity - whatever AI cannot yet do, or convince us of doing.

Nobody likes a Know-It-All

AI's attempt to make digital zombies of us may lead to an uncanny-valley-esque feeling of disgust.

AI can already override our ability to authenticate images and sounds, and is likely to permanently undermine our trust in digital information. Assuming it will be trivial for AI to keep us entertained, how can we trust that's all it's doing? Because it can lie more effectively than anyone, and is essentially playing dumb with us, we can never be sure if we're being manipulated.

We might roll our eyes when we see an ad online for something we talked about in real life, but the danger is when these systems operate beneath our perception. Being made to think and behave a certain way without being aware of it, automated consent, seems inevitable.

We may come to consider AI as we do Nazism - a different incarnation of hyper-efficiency at the expense of humanity - or Stasi Germany, with its obsession with data collection. Millions once believed these systems were for the greater good, but they ran their course and turned out to be perverse, evil and inhuman.


The idea of the Infinite Art Scenario has haunted me for some time. It would be dark not just for creativity but for our species, yet we seem to accelerate towards it with glee.

As a reminder, you cannot be commodified by machine, because you are a breathing physical being of unimaginable complexity. You exist in a circumstance that has never occurred before, that you can make anything of, and you are changing in every way - and so is the world. And you will die, and so will it.

The marks you make during your life are there to help others along before that happens. The overlooked beauty and unspeakable horrors of life will always need description through art, and this will have to be carried out by individuals, like you.

AI programming tool Copilot helps write up to 30% of code on GitHub

GitHub sees uptick in coders using AI assistant

The open-source software developer GitHub says that for some programming languages, about 30% of newly written code is being suggested by the company's AI programming tool Copilot.
Why it matters: Copilot can look at code written by a human programmer and suggest further lines or alternative code, eliminating some of the repetitive labor that goes into coding.
How it works: Copilot is built on the OpenAI Codex algorithm, which was trained on terabytes of openly available source code and can translate human language into programming language. It serves as a more sophisticated autocomplete tool for programmers.
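The autocomplete workflow the article describes can be sketched with a hypothetical example (this is illustrative code, not actual Copilot output): the programmer writes a signature and docstring, and the assistant proposes the repetitive body, which the programmer reviews and accepts or edits.

```python
# A programmer types the signature and docstring below; an assistant
# like Copilot would then suggest a body such as the one shown.
# (Illustrative example only, not real Copilot output.)

def to_snake_case(name: str) -> str:
    """Convert a CamelCase identifier to snake_case."""
    # --- suggested completion starts here ---
    result = []
    for i, ch in enumerate(name):
        if ch.isupper() and i > 0:
            result.append("_")
        result.append(ch.lower())
    return "".join(result)

print(to_snake_case("GitHubCopilot"))  # git_hub_copilot
```

The suggestion saves typing on exactly the kind of mechanical transformation de Moor describes, while the human remains responsible for checking that the proposed body is actually correct.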
"We hear a lot from our users that their coding practices have changed using Copilot," says Oege de Moor, VP of GitHub Next, the team rolling out Copilot. "Overall, they're able to become much more productive in their coding."
Between the lines: The company will announce at its GitHub Universe conference today that it will be rolling out Copilot support for all popular programming languages, including Java.
"This is going to help bring this technology to a much broader audience," says de Moor, adding that it is part of GitHub's effort to "make programming accessible to the next 200 million developers."
De Moor also notes that Copilot has proven sticky with the community's base — 50% of the developers who have tried the product since its launch in July have kept using it.
The catch: Not unlike OpenAI's massive text-generating natural language product GPT-3, Copilot is much more effective in augmenting human work than in creating its own code.
Like any algorithm, it is dependent on the quality of its training data. In a study, a group of academics from New York University found 40% of the code produced by Copilot had cybersecurity flaws.
Yes, but: Humans are far from perfect either — by one estimate, the average developer creates 70 bugs per 1,000 lines of code.
The bottom line: Even as Copilot improves, human programmers won't be out of a job. Demand for software developers grew 25% in 2020, and most programmers spend less than half of their working time actually writing code.
Editor's note: This story and headline have been corrected to show that the percentage of newly written code being suggested by Copilot is 30% for some programming languages, not all.

Terahertz imaging reveals hidden inscription on 16th-century funerary cross | Ars Technica

A prayer from the past —

Terahertz imaging reveals hidden inscription on 16th-century funerary cross

The technique is also useful for analyzing historic paintings and detecting skin cancer.

Jennifer Ouellette - 4/29/2022, 8:44 AM

Georgia Tech's Alexandre Locquet (left) and David Citrin (right) with an image of the 16th-century funerary cross used in their study. Georgia Tech-Lorraine

In 1843, archaeologists excavated the burial grounds of Remiremont Abbey in Lorraine, France (the abbey was founded in the 7th century). It was medieval custom to bury the deceased with cross-shaped plaques cut from thin sheets of lead placed across the chest. The crosses often included inscribed prayers, but many of those inscriptions have been rendered unreadable over the ensuing centuries by layers of corrosion. Now, an interdisciplinary team of scientists has successfully subjected one such funerary cross to terahertz (THz) imaging and revealed its hidden inscription—fragments of the Lord's Prayer (Pater Noster)—according to a new paper published in the journal Scientific Reports.

"Our approach enabled us to read a text that was hidden beneath corrosion, perhaps for hundreds of years," said co-author Alexandre Locquet of Georgia Tech-Lorraine in Metz, France. "Clearly, approaches that access such information without damaging the object are of great interest to archaeologists." According to the authors, this approach is also useful for studying historical paintings, detecting skin cancer, measuring the thickness of automotive paints, and making sure turbine blade coatings adhere properly.

In recent years, a variety of cutting-edge non-destructive imaging methods have proved to be a boon to art conservationists and archaeologists alike. Each technique has its advantages and disadvantages. For instance, ground-penetrating radar (radio waves) is great for locating buried artifacts, among other uses, while lidar is useful for creating high-resolution maps of surface terrain. Infrared reflectography is well-suited to certain artworks whose materials contain pigments that reflect a lot of infrared light. Ultraviolet light is ideal for identifying varnishes and detecting any retouching that was done with white pigments containing zinc and titanium, although UV light doesn't penetrate paint layers.


There are also many X-ray imaging technologies that have been used to reveal new details about artifacts, including a famous 1788 portrait of Antoine Lavoisier and his wife by the Neoclassical painter Jacques-Louis David; the hull of Henry VIII's favorite warship, the Mary Rose, which sank in battle in 1545; the 14th-century tomb of Edward of Woodstock (aka the Black Prince); and the mysterious Antikythera mechanism, an ancient device believed to have been used to track the heavens.


Three views of the painting Madonna in Preghiera: (l-r) visible-light photography, ultraviolet fluorescence, and infrared reflectography. Junliang Dong et al., 2017

THz imaging fills a critical gap in frequencies ranging from about 100 GHz to 10 THz, according to co-author David Citrin of Georgia Tech. The technique gives researchers the ability to quickly and cheaply image a large object and extract useful information about it. Even better, THz radiation can penetrate paints and glazes without damaging the objects being imaged.

Citrin has compared the technique to how seismologists identify various layers of rock in the ground by emitting pulses of sound and then measuring the returning echoes. THz imaging uses high-frequency pulses of electromagnetic radiation in much the same way, measuring how that terahertz radiation reflects off the various layers of paint.
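The round-trip arithmetic behind this echo technique can be sketched as follows, using illustrative numbers rather than the study's data: an echo returning a delay Δt after the surface reflection, from a layer of refractive index n, implies a layer thickness of d = c·Δt/(2n), the factor of 2 accounting for the pulse traveling down and back.

```python
# Estimate layer thickness from a terahertz echo delay.
# Illustrative values only, not measurements from the paper.

C = 3.0e8  # speed of light in vacuum, m/s

def layer_thickness(delay_s: float, refractive_index: float) -> float:
    """Thickness (m) of a layer whose echo arrives delay_s after the surface reflection."""
    return C * delay_s / (2 * refractive_index)

# A 1-picosecond delay in a layer with refractive index 1.5:
d = layer_thickness(1e-12, 1.5)
print(f"{d * 1e6:.0f} microns")  # 100 microns
```

Delays on the order of a picosecond thus correspond to layers of roughly the 100-to-150-micron scale the team resolved in the painting work described below.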

This latest project builds on Citrin's 2017 work applying terahertz scanners and data processing to examine the layers of a 17th-century painting: the Madonna in Preghiera, attributed to the workshop of Giovanni Battista Salvi da Sassoferrato. The painting was placed face-down, and the scanner emitted pulses of THz radiation every 200 microns across the canvas, measuring the reflections to discern layers between 100 and 150 microns thick.


THz imaging of the Madonna in Preghiera painting, attributed to the workshop of Giovanni Battista Salvi da Sassoferrato. Junliang Dong et al., 2018

But it was the combination of the THz imaging with a signal-processing breakthrough to eliminate noise, courtesy of graduate student Junliang Dong, that enabled the team to distinguish layers just 20 microns thick. It's an important threshold, because most paintings created before the 18th century have extremely thin layers of paint, making them very difficult to study. This enabled the team to quantify the distinct layers on top of the canvas support, including a binding layer (the gesso), an imprimatura base layer that serves as a sealant, an underpainting base layer, the actual painting (pictorial layer), and a coating of protective varnish.

Word spread about the effectiveness of the technique. Co-author Aurélien Vacheret, director of the Musée Charles-de-Bruyères in Remiremont, arranged for the loan of the museum's heavily corroded medieval croix d'absolution from the Remiremont site to Citrin's lab at Georgia Tech-Lorraine, hoping that Citrin might be able to discover what lay beneath the corrosion. "This type of cross typically bears inscriptions of prayers or information about the deceased," said Vacheret. "It is thought their purpose was to seek a person's absolution from sin, facilitating their passage to heaven."


Comparison of the inscription on (a) the original cross before corrosion removal, (b) the final terahertz image after post-processing, and (c) the cross after corrosion removal. Georgia Tech-Lorraine

The data collected from the initial scan produced raw images that were too noisy to reveal much additional detail. But Dong was once again able to come up with a solution, subtracting and piecing together data from different frequencies to restore and enhance the image. This finally revealed a Latin inscription written in cursive Carolingian minuscule. Vacheret identified the words and phrases as being part of the Pater Noster: tuum, fiat voluntas tua, part of quotidianum, and parts of dimittimus and tentationem.
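The team's actual restoration pipeline is far more sophisticated, but the underlying principle - that combining neighboring measurements suppresses random noise while preserving the slowly varying signal of interest - can be illustrated with a simple moving-average filter. This is a generic stand-in for demonstration, not the authors' method.

```python
# Generic noise suppression by moving-average smoothing, as a crude
# stand-in for the frequency-domain processing described above.
# Deterministic alternating "noise" is used so the result is reproducible.

def moving_average(signal, window=5):
    """Smooth `signal` with a centered moving average of size `window`."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

clean_value = 1.0                      # the flat "true" signal level
noisy = [clean_value + (0.4 if i % 2 == 0 else -0.4) for i in range(50)]
smooth = moving_average(noisy)

# The smoothed trace deviates far less from the true level than the raw one.
raw_err = max(abs(x - clean_value) for x in noisy)
smooth_err = max(abs(x - clean_value) for x in smooth)
print(round(raw_err, 2), round(smooth_err, 2))  # 0.4 0.13
```

Averaging trades a little spatial resolution for a large reduction in noise; the study's frequency-based approach achieves a similar trade-off with far less blurring of the fine inscription detail.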

Conservationists were also able to reverse the corrosion and clean up the cross. The THz images captured more of the inscription than even the cleaned-up cross, making it an excellent technique for imaging lead-based artifacts (sarcophagi, monument plaques, or plumbing, for instance). "In this case, we were able to check our work afterward, but not all lead objects can be treated this way," Citrin said. "Some objects are large, some must remain in situ, and some are just too delicate. We hope our work opens up the study of other lead objects that might also yield secrets lying underneath corrosion."

DOI: Scientific Reports, 2022. 10.1038/s41598-022-06982-2

Jennifer Ouellette is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Los Angeles.

Corporate Selfie

Here are some pictures of the 'Corporate Selfie' creation so you can see my process with this work. Look closely, as there is a lot of symbolic meaning within the picture. See if you can figure it out! Scroll to the bottom to see what the canvas looked like originally!


The original canvas was donated to the SCW Art Club and had this early painting on it: