TheGoodFuture Blog (Futurist Gerd Leonhard)

Opinion | Why Does Everyone Feel So Insecure All the Time? (2023-09-13)
“As the British political theorist Mark Neocleous has noted, the modern word “insecurity” entered the English lexicon in the 17th century, just as our market-driven society was coming into being. All the while, manufactured insecurity encourages us to amass money and objects as surrogates for the kinds of security that cannot actually be commodified — connection, meaning, purpose, contentment, safety, self-esteem, dignity and respect — but which can only truly be found in community with others.”

Opinion | Why Does Everyone Feel So Insecure All the Time?
https://www.nytimes.com/2023/08/18/opinion/inequality-insecurity-economic-wealth.html
via Instapaper

The Great Disruption Has Begun — Paul Gilding (2023-09-03)
“We tend to think of climate’s economic impact narrowly, for example the cost of climate disasters, or the loss of value in the fossil fuel industry. This dramatically underestimates the breadth and depth of climate change’s economic implications. Increasing costs for infrastructure, disruptions to supply chains, inflationary impacts, higher insurance costs, sovereign debt risks, geopolitical instability, failed states, military conflict and so on.”

The Great Disruption Has Begun — Paul Gilding
https://www.paulgilding.com/cockatoo-chronicles/the-great-disruption-has-begun
via Instapaper

A mistake to ask software to improve our thinking (2023-08-29)
“In short: it is probably a mistake, in the end, to ask software to improve our thinking. Even if you can rescue your attention from the acid bath of the internet; even if you can gather the most interesting data and observations into the app of your choosing; even if you revisit that data from time to time — this will not be enough. It might not even be worth trying.

The reason, sadly, is that thinking takes place in your brain. And thinking is an active pursuit — one that often happens when you are spending long stretches of time staring into space, then writing a bit, and then staring into space a bit more. It’s here that the connections are made and the insights are formed. And it is a process that stubbornly resists automation.”

Why note-taking apps don't make us smarter
https://www.platformer.news/p/why-note-taking-apps-dont-make-us?utm_medium=email
via Instapaper

The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly (2023-08-13)
“But we’re not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. “It works for people,” he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge in a Skynet kind of hive intelligence.

“The idea is you don't make everything digital,” he says of the analog approach. “Because every piece of analog hardware is slightly different, you can't transfer weights from one analog model to another. So there's no efficient way of learning in many different copies of the same model. If you do get AGI [via analog computing], it’ll be much more like humans, and it won’t be able to absorb as much information as those digital models can.””

The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly
https://www.wired.com/story/plaintext-geoffrey-hinton-godfather-of-ai-future-ai/
via Instapaper

Why Silicon Valley billionaires are prepping for the apocalypse in New Zealand (2023-08-10)
“First you acquired land in New Zealand, with its rich resources and clean air, away from the chaos and ecological devastation gripping the rest of the world. Next you moved on to seasteading, the libertarian ideal of constructing manmade islands in international waters; on these floating utopian micro-states, wealthy tech innovators would be free to go about their business without interference from democratic governments. (Thiel was an early investor in, and advocate of, the seasteading movement, though his interest has waned in recent years.) Then you mined the moon for its ore and other resources, before moving on to colonise Mars. This last level of the game reflected the current preferred futurist fantasy, most famously advanced by Thiel’s former PayPal colleague Elon Musk, with his dream of fleeing a dying planet Earth for privately owned colonies on Mars.”

Why Silicon Valley billionaires are prepping for the apocalypse in New Zealand
http://www.theguardian.com/news/2018/feb/15/why-silicon-valley-billionaires-are-prepping-for-the-apocalypse-in-new-zealand
via Instapaper

I thought fossil fuel firms could change. I was wrong (2023-07-26)
“Let’s remember what the industry could and should be doing with those trillions of dollars: stepping away from any new oil and gas exploration, investing heavily into renewable energies and accelerating carbon capture and storage technologies to clean up existing fossil fuel use. Also, cutting methane emissions from the entire production line, abating emissions along their value chain and facilitating access to renewable energy for those still without electricity who number in their millions.

Instead, what we see is international oil companies cutting back, slowing down or, at best, painfully maintaining their decarbonisation commitments, paying higher dividends to shareholders, buying back more shares and – in some countries – lobbying governments to reverse clean energy policies while paying lip service to change.”

I thought fossil fuel firms could change. I was wrong
https://www.aljazeera.com/opinions/2023/7/6/i-thought-fossil-fuel-firms-could-change-i-was-wrong
via Instapaper

Silicon Valley’s Quest to Build God and Control Humanity (2023-07-15)
“This brings me back to my days in Bible school. God’s love, we were told, was not visible the way love from people around us was, but closer examination (and faith) would reveal its presence. By accepting the existence of God’s love, we could grow and develop such that our true potential, our destiny, our capacity to be a better child/sibling/friend/neighbor/lover/human would be realized. Andreessen’s hypothetical AI love is different from God’s love, of course, because with AI, you get the transformative effects of a god’s personal intervention as well as the affirmation of something that undeniably interacts with you.”

Silicon Valley’s Quest to Build God and Control Humanity
https://www.thenation.com/?post_type=article&p=451204
via Instapaper

How the generative A.I. boom could forever change online advertising (2023-07-09)

“Based on his limited experience with ChatGPT, the AI chatbot created by OpenAI, McKelvey said the technology fails to produce the kind of long-form content that companies could find useful as promotional copy.

“It can provide fairly generic content, pulling from information that’s already out there,” McKelvey said. “But there’s no distinctive voice or point of view, and while some tools claim to be able to learn your brand voice based on your prompts and your inputs, I haven’t seen that yet.””

How the generative A.I. boom could forever change online advertising
https://www.cnbc.com/2023/07/08/how-the-generative-ai-boom-could-forever-change-online-advertising.html
via Instapaper

Who runs the world? (2023-07-02)
“But if the digital space itself becomes the most important arena of great power competition, with the power of governments continuing to erode relative to the power of tech companies, then the digital order itself will become the dominant global order. If that happens, we’ll have a post-Westphalian world – a technopolar order dominated by tech companies as the central players in 21st-century geopolitics.”

Who runs the world?
https://www.gzeromedia.com/by-ian-bremmer/who-runs-the-world
via Instapaper

Contra Marc Andreessen on AI (2023-06-23)
“I posted Marc’s question in a group chat, and just off the cuff, Tristan Hume, who works on interpretability alignment research at Anthropic, supplied the following list (edited for clarity):

I’d feel much better if we solved hallucinations and made models follow arbitrary rules in a way that nobody succeeded in red-teaming (in a way that wasn't just confusing the model into not understanding what it was doing).

I’d feel pretty good if we then further came up with and implemented a really good supervision setup that could also identify and disincentivize model misbehavior, to the extent where me playing as the AI couldn't get anything past the supervision. Plus evaluations that were really good at eliciting capabilities and showed smooth progress and only mildly superhuman abilities. And our datacenters were secure enough I didn't believe that I could personally hack any of the major AI companies if I tried.

I’d feel great if we solve interpretability to the extent where we can be confident there's no deception happening, or develop really good and clever deception evals, or come up with a strong theory of the training process and how it prevents deceptive solutions.”

Contra Marc Andreessen on AI
https://www.dwarkeshpatel.com/p/contra-marc-andreessen-on-ai?utm_term=0_e605619869-85042d89c9-%5BLIST_EMAIL_ID%5D
via Instapaper

The Real Implications of Generative AI (2023-06-18)
“Then there is the human analogy error,” said Paul. “In other words, because these things seem like us, they chat like us, they interact with us, we can make the mistake that they are like us, that they evolve, that they have agency.”

“They don’t have agency, they do not evolve,” he said. “They are agents of us. They are part of something that we are creating.””

The Real Implications of Generative AI
https://peterleyden.substack.com/p/the-real-implications-of-generative?utm_campaign=post
via Instapaper

Europeans Take a Major Step Toward Regulating A.I. (2023-06-14)
“The E.U.’s bill takes a “risk-based” approach to regulating A.I., focusing on applications with the greatest potential for human harm. This would include where A.I. systems are used to operate critical infrastructure like water or energy, in the legal system, and when determining access to public services and government benefits. Makers of the technology will have to conduct risk assessments before putting the tech into everyday use, akin to the drug approval process”

Europeans Take a Major Step Toward Regulating A.I.
https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html
via Instapaper

Why AI Will Save the World | Andreessen Horowitz (2023-06-09)
“The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.”

Why AI Will Save the World | Andreessen Horowitz
https://a16z.com/2023/06/06/ai-will-save-the-world/
via Instapaper

Robot takeover? Not quite. Here’s what AI doomsday would look like (2023-06-04)
“Misinformation is the individual [AI] harm that has the most potential and highest risk in terms of larger-scale potential harms,” said Rebecca Finlay, of the Partnership on AI. “The question emerging is: how do we create an ecosystem where we are able to understand what is true? How do we authenticate what we see online?”

Robot takeover? Not quite. Here’s what AI doomsday would look like
https://www.theguardian.com/technology/2023/jun/03/ai-danger-doomsday-chatgpt-robots-fears
via Instapaper

‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases (2023-05-23)
“What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.””

‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases
https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases
via Instapaper

Column: Afraid of AI? The startups selling it want you to be (2023-05-16)
“The great promise of OpenAI’s suite of AI services is, at root, that companies and individuals will save on labor costs — they can generate the ad copy, art, slide deck presentations, email marketing and data entry processes fast and cheap.”

Column: Afraid of AI? The startups selling it want you to be
https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be
via Instapaper

You Trained the Chatbot to Do Your Job. Why Didn’t You Get Paid? (2023-05-10)
“Let’s imagine you called me with a problem, and I solved it,” says Danielle Li, an economist at MIT’s Sloan School of Management who coauthored the study with MIT PhD candidate Lindsey Raymond and Erik Brynjolfsson, director of Stanford’s Digital Economy Lab. In a world without AI chatbots, that would create what economists call productivity. But in the ChatGPT era it also produces valuable data. “Now that data can be used to solve other people's problems, so the same answer has generated more output,” Li says. “And I think it's really important to find a way to measure and compensate that.”

You Trained the Chatbot to Do Your Job. Why Didn’t You Get Paid?
https://www.wired.com/story/should-you-get-paid-for-teaching-a-chatbot-to-do-your-job/
via Instapaper

Power and Progress review – why the tech-equals-progress narrative must be challenged (2023-05-09)

“There are three things that need to be done by a modern progressive movement. First, the technology-equals-progress narrative has to be challenged and exposed for what it is: a convenient myth propagated by a huge industry and its acolytes in government, the media and (occasionally) academia. The second is the need to cultivate and foster countervailing powers – which critically should include civil society organisations, activists and contemporary versions of trade unions. And finally, there is a need for progressive, technically informed policy proposals, and the fostering of thinktanks and other institutions that can supply a steady flow of ideas about how digital technology can be repurposed for human flourishing rather than exclusively for private profit.”

Power and Progress review – why the tech-equals-progress narrative must be challenged
https://www.theguardian.com/books/2023/may/07/power-and-progress-daron-acemoglu-simon-johnson-review-formidable-demolition-of-the-technology-equals-progress-myth
via Instapaper

What Really Made Geoffrey Hinton Into an AI Doomer (2023-05-09)
“Recent leaps in AI also conjure up utopian ideas. Hinton points to Ray Kurzweil, another AI pioneer now at Google. “Ray wants to be immortal,” he says. “Well, the good news is we’ve figured out how to make immortal beings, the bad news is it’s not for us. But can you imagine if all old white men hung around forever?””

What Really Made Geoffrey Hinton Into an AI Doomer
https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/
via Instapaper

Where will OpenAI, Midjourney, and StableDiffusion take AI by 2033? (2023-05-03)
https://www.fastcompany.com/90873422/ai-feels-like-a-magic-act-by-2033-it-will-be-a-horror-movie

There Is No A.I. (2023-05-02)
“There is also near-unanimity, I find, that the black-box nature of our current A.I. tools must end. The systems must be made more transparent. We need to get better at saying what is going on inside them and why. This won’t be easy. The problem is that the large-model A.I. systems we are talking about aren’t made of explicit ideas. There is no definite representation of what the system “wants,” no label for when it is doing a particular thing, like manipulating a person. There is only a giant ocean of jello—a vast mathematical mixing. A writers’-rights group has proposed that real human authors be paid in full when tools like GPT are used in the scriptwriting process; after all, the system is drawing on scripts that real people have made. But when we use A.I. to produce film clips, and potentially whole movies, there won’t necessarily be a screenwriting phase. A movie might be produced that appears to have a script, soundtrack, and so on, but it will have been calculated into existence as a whole.”

There Is No A.I.
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
via Instapaper

Yuval Noah Harari argues that AI has hacked the operating system of human civilisation (2023-05-01)
“Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?”

Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
via Instapaper

Everything we all wrote for the web is now being used to train AI (2023-04-25)
“Be smart: AI's hunger for training data casts the entire 30-year history of the popular internet in a new light.

Today's AI breakthroughs couldn't happen without the availability of the digital stockpiles and landfills of info, ideas and feelings that the internet prompted people to produce.
But we produced all that stuff for one another, not for AI.”

Everything we all wrote for the web is now being used to train AI
https://www.axios.com/2023/04/24/ai-chatgpt-blogs-web-writing-training-data
via Instapaper

How AI could change computing, culture and the course of history (2023-04-23)
“The capacity to translate from one language to another includes, in principle and increasingly in practice, the ability to translate from language to code. A prompt written in English can in principle spur the production of a program that fulfils its requirements. Where browsers detached the user interface from the software application, LLMs are likely to dissolve both categories. This could mark a fundamental shift in both the way people use computers and the business models within which they do so.”

How AI could change computing, culture and the course of history
https://www.economist.com/essay/2023/04/20/how-ai-could-change-computing-culture-and-the-course-of-history
via Instapaper

Google.gov — The New Atlantis (2023-04-22)
“Dreams of war between Google and government, however, obscure a much different relationship that may emerge between them — particularly between Google and progressive government. For eight years, Google and the Obama administration forged a uniquely close relationship. Their special bond is best ascribed not to the revolving door, although hundreds of meetings were held between the two; nor to crony capitalism, although hundreds of people have switched jobs from Google to the Obama administration or vice versa; nor to lobbying prowess, although Google is one of the top corporate lobbyists.

Rather, the ultimate source of the special bond between Google and the Obama White House — and modern progressive government more broadly — has been their common ethos. Both view society’s challenges today as social-engineering problems, whose resolutions depend mainly on facts and objective reasoning”

Google.gov — The New Atlantis
https://www.thenewatlantis.com/publications/googlegov
via Instapaper

How Generative AI Could Disrupt Creative Work (2023-04-14)
“The “creator economy” is currently valued at around $14 billion per year. Enabled by new digital channels, independent writers, podcasters, artists, and musicians can connect with audiences directly to make their own incomes. Internet platforms such as Substack, Flipboard, and Steemit enable individuals not only to create content, but also become independent producers and brand managers of their work. While many kinds of work were being disrupted by new technologies, these platforms offered people new ways to make a living through human creativity.”

How Generative AI Could Disrupt Creative Work
https://hbr.org/2023/04/how-generative-ai-could-disrupt-creative-work
via Instapaper

How AI like ChatGPT could change the future of work, education and our minds (2023-04-11)
“What are the best- and worst-case scenarios for generative AI’s integration in everyday life?

“The most optimistic scenarios are that AI is like the equivalent of the Industrial Revolution,” Reich said, creating “exponential increases in productivity in the economy. It accelerates scientific breakthroughs, medical breakthroughs. It frees up human beings from having to do drudgery work, liberates people from unpleasant tasks and massively increases the size of the economy.””

How AI like ChatGPT could change the future of work, education and our minds
https://www.sfchronicle.com/tech/article/ai-chatgpt-education-work-17846358.php
via Instapaper

AI Video Generators Are Nearing a Crucial Tipping Point (2023-04-09)
“The rapid advances in generative AI may prove dangerous in an era when social media has been weaponized and deepfakes are propagandists' playthings. As Jason Parham wrote for WIRED this week, we also need to seriously consider how generative AI can recapture and repurpose ugly stereotypes”

AI Video Generators Are Nearing a Crucial Tipping Point
https://www.wired.com/story/ai-video-generators-are-nearing-a-crucial-tipping-point/
via Instapaper

Why longtermism is the world’s most dangerous secular credo | Aeon Essays (2023-04-04)
“Such considerations have led many scholars to acknowledge that, as Stephen Hawking wrote in The Guardian in 2016, ‘we are at the most dangerous moment in the development of humanity.’ And Max Tegmark contends that ‘it’s probably going to be within our lifetimes … that we’re either going to self-destruct or get our act together.’ Consistent with these dismal declarations, the Bulletin of the Atomic Scientists in 2020 set its iconic Doomsday Clock to a mere 100 seconds before midnight (or doom), the closest it’s been since the clock was created in 1947, and more than 11,000 scientists from around the world signed an article in 2020 stating ‘clearly and unequivocally that planet Earth is facing a climate emergency’, and without ‘an immense increase of scale in endeavours to conserve our biosphere [we risk] untold suffering due to the climate crisis.’ As the young climate activist Xiye Bastida summed up this existential mood in a Teen Vogue interview in 2019, the aim is to ‘make sure that we’re not the last generation’, because this now appears to be a very real possibility.”

Why longtermism is the world’s most dangerous secular credo | Aeon Essays
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
via Instapaper

Opinion | What if climate change meant not doom — but abundance? (2023-03-23)

“What if we imagined “wealth” consisting not of the money we stuff into banks or the fossil-fuel-derived goods we pile up, but of joy, beauty, friendship, community, closeness to flourishing nature, to good food produced without abuse of labor? What if we were to think of wealth as security in our environments and societies, and as confidence in a viable future?

“Getting and spending, we lay waste our powers,” William Wordsworth wrote a couple of centuries ago. What would it mean to recover those powers, to be rich in time instead of stuff?”

Opinion | What if climate change meant not doom — but abundance?
https://www.washingtonpost.com/opinions/2023/03/15/rebecca-solnit-climate-change-wealth-abundance/
via Instapaper