Opinion | Why Does Everyone Feel So Insecure All the Time?

“As the British political theorist Mark Neocleous has noted, the modern word “insecurity” entered the English lexicon in the 17th century, just as our market-driven society was coming into being. All the while, manufactured insecurity encourages us to amass money and objects as surrogates for the kinds of security that cannot actually be commodified — connection, meaning, purpose, contentment, safety, self-esteem, dignity and respect — but which can only truly be found in community with others.”

Opinion | Why Does Everyone Feel So Insecure All the Time?
https://www.nytimes.com/2023/08/18/opinion/inequality-insecurity-economic-wealth.html
via Instapaper

The Great Disruption Has Begun — Paul Gilding

“We tend to think of climate’s economic impact narrowly, for example the cost of climate disasters, or the loss of value in the fossil fuel industry. This dramatically underestimates the breadth and depth of climate change’s economic implications. Increasing costs for infrastructure, disruptions to supply chains, inflationary impacts, higher insurance costs, sovereign debt risks, geopolitical instability, failed states, military conflict and so on.”

The Great Disruption Has Begun — Paul Gilding
https://www.paulgilding.com/cockatoo-chronicles/the-great-disruption-has-begun
via Instapaper

A mistake to ask software to improve our thinking

“In short: it is probably a mistake, in the end, to ask software to improve our thinking. Even if you can rescue your attention from the acid bath of the internet; even if you can gather the most interesting data and observations into the app of your choosing; even if you revisit that data from time to time — this will not be enough. It might not even be worth trying.

The reason, sadly, is that thinking takes place in your brain. And thinking is an active pursuit — one that often happens when you are spending long stretches of time staring into space, then writing a bit, and then staring into space a bit more. It’s here that the connections are made and the insights are formed. And it is a process that stubbornly resists automation.”

Why note-taking apps don't make us smarter
https://www.platformer.news/p/why-note-taking-apps-dont-make-us
via Instapaper

The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly

“But we’re not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. “It works for people,” he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge in a Skynet kind of hive intelligence.

“The idea is you don't make everything digital,” he says of the analog approach. “Because every piece of analog hardware is slightly different, you can't transfer weights from one analog model to another. So there's no efficient way of learning in many different copies of the same model. If you do get AGI [via analog computing], it’ll be much more like humans, and it won’t be able to absorb as much information as those digital models can.””

The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly
https://www.wired.com/story/plaintext-geoffrey-hinton-godfather-of-ai-future-ai/
via Instapaper
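
A toy numerical sketch of Hinton’s point above, offered with caveats: the network, the input and the per-device “gain” factors below are all invented for illustration, with the gain standing in for manufacturing variation in analog hardware. It only shows why identical digital copies stay in lockstep when weights are copied, while analog instances running the same weights drift apart.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))   # weights learned on one hypothetical device
x = rng.normal(size=4)              # an example input

# Digital case: every copy computes with identical arithmetic, so copying
# the weights copies the behaviour exactly.
digital_a = np.tanh(weights @ x)
digital_b = np.tanh(weights @ x)
print(np.allclose(digital_a, digital_b))  # True: the copies agree exactly

# "Analog" case (toy model): each physical unit applies its own fixed,
# slightly different gain, a stand-in for device-to-device variation.
gain_a = 1.0 + 0.05 * rng.normal(size=4)
gain_b = 1.0 + 0.05 * rng.normal(size=4)
analog_a = np.tanh(gain_a * (weights @ x))
analog_b = np.tanh(gain_b * (weights @ x))
print(np.allclose(analog_a, analog_b))    # False: the same weights no longer
                                          # reproduce the same function

That device-specific mismatch is the mechanism behind Hinton’s claim: many copies of one digital model can pool what they learn by sharing weights, while analog instances cannot.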

Why Silicon Valley billionaires are prepping for the apocalypse in New Zealand

“First you acquired land in New Zealand, with its rich resources and clean air, away from the chaos and ecological devastation gripping the rest of the world. Next you moved on to seasteading, the libertarian ideal of constructing manmade islands in international waters; on these floating utopian micro-states, wealthy tech innovators would be free to go about their business without interference from democratic governments. (Thiel was an early investor in, and advocate of, the seasteading movement, though his interest has waned in recent years.) Then you mined the moon for its ore and other resources, before moving on to colonise Mars. This last level of the game reflected the current preferred futurist fantasy, most famously advanced by Thiel’s former PayPal colleague Elon Musk, with his dream of fleeing a dying planet Earth for privately owned colonies on Mars.”

Why Silicon Valley billionaires are prepping for the apocalypse in New Zealand
http://www.theguardian.com/news/2018/feb/15/why-silicon-valley-billionaires-are-prepping-for-the-apocalypse-in-new-zealand
via Instapaper

I thought fossil fuel firms could change. I was wrong

“Let’s remember what the industry could and should be doing with those trillions of dollars: stepping away from any new oil and gas exploration, investing heavily into renewable energies and accelerating carbon capture and storage technologies to clean up existing fossil fuel use. Also, cutting methane emissions from the entire production line, abating emissions along their value chain and facilitating access to renewable energy for those still without electricity who number in their millions.

Instead, what we see is international oil companies cutting back, slowing down or, at best, painfully maintaining their decarbonisation commitments, paying higher dividends to shareholders, buying back more shares and – in some countries – lobbying governments to reverse clean energy policies while paying lip service to change.”

I thought fossil fuel firms could change. I was wrong
https://www.aljazeera.com/opinions/2023/7/6/i-thought-fossil-fuel-firms-could-change-i-was-wrong
via Instapaper

Silicon Valley’s Quest to Build God and Control Humanity

“This brings me back to my days in Bible school. God’s love, we were told, was not visible the way love from people around us was, but closer examination (and faith) would reveal its presence. By accepting the existence of God’s love, we could grow and develop such that our true potential, our destiny, our capacity to be a better child/sibling/friend/neighbor/lover/human would be realized. Andreessen’s hypothetical AI love is different from God’s love, of course, because with AI, you get the transformative effects of a god’s personal intervention as well as the affirmation of something that undeniably interacts with you.”

Silicon Valley’s Quest to Build God and Control Humanity
https://www.thenation.com/?post_type=article&p=451204
via Instapaper

How the generative A.I. boom could forever change online advertising

“Based on his limited experience with ChatGPT, the AI chatbot created by OpenAI, McKelvey said the technology fails to produce the kind of long-form content that companies could find useful as promotional copy.

“It can provide fairly generic content, pulling from information that’s already out there,” McKelvey said. “But there’s no distinctive voice or point of view, and while some tools claim to be able to learn your brand voice based on your prompts and your inputs, I haven’t seen that yet.””

How the generative A.I. boom could forever change online advertising
https://www.cnbc.com/2023/07/08/how-the-generative-ai-boom-could-forever-change-online-advertising.html
via Instapaper

Who runs the world?

“But if the digital space itself becomes the most important arena of great power competition, with the power of governments continuing to erode relative to the power of tech companies, then the digital order itself will become the dominant global order. If that happens, we’ll have a post-Westphalian world – a technopolar order dominated by tech companies as the central players in 21st-century geopolitics.”

Who runs the world?
https://www.gzeromedia.com/by-ian-bremmer/who-runs-the-world
via Instapaper

Contra Marc Andreessen on AI

“I posted Marc’s question in a group chat, and just off the cuff, Tristan Hume, who works on interpretability alignment research at Anthropic, supplied the following list (edited for clarity):

I’d feel much better if we solved hallucinations and made models follow arbitrary rules in a way that nobody succeeded in red-teaming (in a way that wasn't just confusing the model into not understanding what it was doing).

I’d feel pretty good if we then further came up with and implemented a really good supervision setup that could also identify and disincentivize model misbehavior, to the extent where me playing as the AI couldn't get anything past the supervision. Plus evaluations that were really good at eliciting capabilities and showed smooth progress and only mildly superhuman abilities. And our datacenters were secure enough I didn't believe that I could personally hack any of the major AI companies if I tried.

I’d feel great if we solve interpretability to the extent where we can be confident there's no deception happening, or develop really good and clever deception evals, or come up with a strong theory of the training process and how it prevents deceptive solutions.”

Contra Marc Andreessen on AI
https://www.dwarkeshpatel.com/p/contra-marc-andreessen-on-ai
via Instapaper