RUIN AND BEAUTY

DEENA METZGER'S BLOG

Is this Our Future? On Lavender and Where is Daddy

If you glance over the last year of essays I have written on Substack, you will see those I wrote because I was, and am, alarmed by AI and hoped, yes, naively, that despite all, we would find ways to stop AI before it stops us.  But when I began following the rapid growth of AI, I understood that what I hoped, and I was not the only one hoping or assuming, would not be fulfilled.  It is taking a startlingly short time for AI to gain the ability to step out on its own, to impersonate someone, take over an election, steal, abuse and kill, because all that is required is a request by a human being and the algorithms, and it is becoming clear that there are no restraints on either the human or the technology.

Even as I understood that AI could not be stopped in its development, and that it was clearly already out of our hands, I still hoped that we might somehow modify or influence it by embedding ethical concerns into AI as it grew up, developed, matured and went out on its own. But this has also proved impossible.  It has taken only one year and fifteen days to go from thinking AI could be stopped to recognizing it has run amok.

On April 3, 2023, I urged AI engineers Tristan Harris and Aza Raskin to change the code of the golem they admitted they had created, as the myth they cited advises. I was really hoping that we could begin, out of concern, no, out of awareness that we cannot survive unless Earth does, no, out of unimpeded, unlimited, unprecedented love for this awesomely beautiful and unimaginably intelligent planet, to embed ethical understanding and allegiance to the natural world into every aspect of AI so that it would be resonant with the deepest nature of all life, to embed so clear and unarguable an ethos that AI would incorporate a benevolent nature, would then, shall we say, be natural, would be another species sharing in and protecting the ecoheart of the planet.

Why not?

Really, why not?

But today, I see no likelihood of any limitation on those who want to use AI for their most dangerous and terrifying purposes against humans and Earth, and whose actions, without doubt, set a precedent for horrors to come. I learned this at a webinar, “In Context: Israel’s AI Warfare Tactics + Efforts to Halt Arms Transfers,” sponsored by the organization Just Vision and +972, the Israeli-Palestinian news journal that broke the story of Lavender.

Smoke rises after Israeli airstrikes in Beit Lahia, in the northern Gaza Strip, December 28, 2023. (Yonatan Sindel/Flash90)

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties…

“Where’s Daddy?”, also revealed here for the first time, was used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

*** 

Imagine this with me.  Tens of thousands of alleged … terrorists?… tens of thousands of probably mostly innocent people, including journalists, are identified and targeted by an AI program, Lavender, assumed to be guilty, and so tracked as they walk or drive home by a program called Where’s Daddy, which waits for them to enter their homes, apartment buildings or village complexes and, only when the suspect, family and neighbors are gathered, drops a (cheaper) dumb bomb by drone that kills them all.

Where does this software come from?  Consider that employees of Google are claiming mental strain and illness as a result of working on such projects, and have been arrested and fired for staging a sit-in protesting Google’s (with Amazon) $1.2 billion cloud contract with the Israeli government, Project Nimbus.

GABRIEL SCHUBINER: “The campaign really is driven by worker concerns and worker needs around the ethical use of our labor, as well as the direct workplace concerns of the health and safety concerns around working at a company that is facilitating genocide.” 1

*** 

Some months ago, a Palestinian woman in Gaza asked if we could imagine that her daughter, 5 years old, might someday be one of the peacemakers.  This week, she said that she and her daughter dared not go outside to look at the stars as any movement, even of their eyelids, could bring down a drone, a dumb bomb, and end their lives.

*** 

But dear friends, you know hope is indomitable.  Still, it is not hope that matters but possibility and our devotion to it.  We may not be able to alter or transform AI, but we still have an obligation to Earth as long as we are alive.  And to all life.  Rather than focusing on AI, our commitment to living on behalf of all life at each moment may be the critical medicine. We are at a challenging threshold: we, all beings, human and more than human, will all survive into the future, or we will all die.  Our fate may be decided by AI, but, also, it may be in our hands. Who is it who must embody the ethos that can save all life?

***

A friend, who is a musician, was attending a peace conference.  One day, as she was improvising on the piano, she was approached by one of the attendees. “You know,” he said, “if this music could be embedded in AI software, it would take it right down.”

1. https://www.democracynow.org/2024/4/17/no_tech_for_apartheid_google_israel
