I welcome your call for critical thinking. Here are a few reflections:
I hear you say that AI is threatening your livelihood.
That is likely true in some ways, as it was for textile workers in the 1800s, for language translators over the past two decades, and now for artists, graphic designers, writers, and likely even established businesses like Google Search.
How do we respond to this disruption? We can fear it or resist it, but as technology continues to develop it becomes harder to ignore its relevance and its integration into our lives.
So what do we do? Adapt.
As weavers and knitters have done (focusing on high-craft, high-cost alternatives). Or language translators (reviewing and editing machine translations in partnership with the tech, as at Unbabel).
How artists adapt to and are empowered by AI's emergence is, to me personally, a much more interesting conversation than why we should reject and fear it.
I also hear you say your art was referenced without your consent or compensation.
Humans are inspired by the artwork of other humans and create new artwork that emulates the style of others. Are you comfortable with this?
Your content on social media trains algorithms and generates revenue for the platforms, for which you are not compensated.
How do we respond to this broken system of value extraction?
Web3. Data sovereignty. Royalty micropayments. Etc.
If we channel our attention and energy from resisting inevitable change into helping design and build new systems and tools, then the outcomes we seek are more likely to happen.
Jenny, what do you think should happen? What is a more beautiful and respectful path forward, through your lens?
What does “anti-humanist” mean to you? What is an example of a pro-humanist machine? Do you think it's possible for a machine to be pro-humanist?
Did the invention of the Wacom tablet hurt illustrators? Did the camera hurt painters?
I totally get that. I feel empathy for its disruption of your art. I feel its impact on the work I do as well.
As a regenerative futurist, I’m committed to engaging in dialogue exploring protopian alternatives and using speculative design to explore and speak into new possibilities. We cannot create change if we can’t visualize and speak into it first.
I also ask questions often focused on definitions, as I feel they are at the heart of true understanding.
I hear you say you see AI as dangerous and harmful. I'm not here to try to change your mind, merely to reflect the value in defining what is and is not AI, and the coinciding value of speaking into a solutionist perspective.
FWIW, I have found great optimism in reading books like “Who Owns the Future?” by Jaron Lanier and articles like the one below.
There is a potential path where technological advancements like AI help make society as a whole more prosperous, rather than just individuals or monopoly corporations. A path where our art is decoupled from the necessity of monetizing it to fund our livelihoods. A path where our fundamental financial needs are met through something like UBI.
That's not to say UBI isn't fraught with risk and the possibility of greater societal control and oppression. But it doesn't necessarily have to be. It doesn't even need to come from government; there are many potential approaches to communal support.
As you've alluded to, it comes down to our consciousness. How we care for one another. What we value. Etc.
That, imo, is at the core of what influences whether these tools (technology = tools) are used for good or for bad. Whether they help or hinder our humanity.
If we continue to focus our attention on expanding our consciousness and redesigning our societal systems… we’ll likely create a path where those in positions of influence make better decisions in support of the whole.
https://www.technologyreview.com/2015/06/16/11184/who-will-own-the-robots/
