The three rules of technology, according to Tristan Harris and Aza Raskin in the Center for Humane Technology presentation ‘The AI Dilemma’: When you invent a new technology, you uncover a new class of responsibility. If that technology confers power, it will start a race. If you do not coordinate, it will end in tragedy.
AMD, MICROSOFT, AI AND STONKS
It’s becoming clear that AI is finding its place in all industries, and this is eliciting all types of reactions. There are full doomers worried AI will soon take over the world, and there are those who recognize AI as a useful tool for their industry. Of course, there is everyone in between, and those without an opinion. One thing is certain, however: AI isn’t going anywhere, and it is disrupting industries.
We’re seeing AI write code, trade stock, make copy, practice medicine and practice law. Okay, I might be exaggerating, but it is a powerful tool that can help in these areas among others.
Rumors have been circulating about a collaboration between AMD and Microsoft to create a chip dedicated to Microsoft’s AI. True or not, the rumors led to a boost in AMD’s stock.
Doomers Gonna Doom
I was originally writing this to talk about how even mere rumors of AI collaborations were changing investor perception, which I’m still touching on, but in doing my research I came across another interesting topic. In an article by John Naughton, “‘A race it might be impossible to stop’: how worried should we be about AI?”, he presents an argument advanced by Geoffrey Hinton that essentially states that as a few large corporations race to implement AI, they will become increasingly less cautious, allowing AI to spiral out of human control.
Geoffrey Hinton is one of the main contributors to the creation of deep learning, neural networks, AI, or whatever else you want to call it. He recently left Google, supposedly to be able to speak more freely about his concerns with AI.
As we know, others have spoken out about their concerns with AI. Elon Musk has expressed his reservations on several occasions and urged us to start creating policy to regulate it before it’s too late. Of course, shortly after making these statements, he began working on his own iterations of generative AI.
They’re Gonna Take Our Jobs
It’s true, AI will be taking jobs, but that is a tale as old as time, or at least as old as the Industrial Revolution. There will always be new technologies (tools) that change the way people interact with the world, and we will be forced to adapt.
The reality is that, at least in the short term, jobs will change. A doctor might not be as good at spotting and diagnosing cancer as an AI, but the doctor still has something AI doesn’t: interpersonal skills. AI can write reports and copy, but it can’t deliver them in a meaningful way. No salesperson is going to be motivated by an unemotional creature, and no buyer is going to be motivated to make a purchase by one.
Plumbers, carpenters and most tradespeople have nothing to fear, because though AI may be able to solve the problems you do, it can’t go where you go and solve those problems in the dynamic way that’s necessary, the way only a human can. For now, at least.
The Four Hallucinations of Tech CEOs
Who knows what René Descartes would think of this, but Naomi Klein makes the case that we are being fed a line when it comes to the promises of AI in her article “AI machines aren’t ‘hallucinating’. But their makers are”. She outlines four claims made about AI that, in her opinion, don’t hold water. I’m obliged to agree. The four claims are that AI will solve the climate crisis, that it will contribute to wise government, that we can trust large tech corps to do the right thing, and that AI will end all the boring work.
She argues that we don’t need AI to solve climate change: the hard part isn’t knowing what to do about it, but rather taking the actions that experts and scientists have been urging for years or even decades.
She also argues that AI can’t help with governance, because governments fail to do the right thing for almost the same reason we fail to act on climate change: doing it is hard, and those with money have enormous leverage over the system through lobbying and financial influence.
Klein points out that the tech giants are already responsible for the largest invasion of privacy in human history, as well as being perpetrators of theft through their large language models, arguing that feeding a machine artwork and having it replicate that work is akin to copyright theft.
Lastly, the idea that AI will end ‘boring’ work is a farce targeting those who believe we either are, or will be, living in some utopian socialist reality, when in fact we are still under the thumb of a capitalist system that just doesn’t work that way. It won’t start to work that way just because of a new “tool”.
AI Will Change the World
I don’t think it’s controversial to say AI is going to change the world. The controversy is over whether it will be for the good or for the downfall of humanity. In the short term, at least, I believe we will see a mix of positive and negative things come from the use of AI. As the impact of AI increases, we will draw closer to the advent of general AI, but not in a singular step. There isn’t a day we will be able to pinpoint like in Terminator when Skynet goes sentient. The dangers and the power of AI will continue to grow throughout our lifetimes, and we will witness it unfold before us.
It’s possible AI could be a massive benefit to society, but if it remains in the control of big tech corporations competing for dominance, the urge to use it irresponsibly to maintain their grasp on power will only grow. Will we see coordination, or will the race end in tragedy? Only time will tell.