
AI


Lil


  • 2 years later...

Comments? 

 

https://www.wired.com/story/researcher-says-ai-not-artificial-intelligent/

 

This Researcher Says AI Is Neither Artificial nor Intelligent

Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI. 
 

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.


KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems such as face recognition services being more error prone on minorities.

We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just “raw” material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn’t an inert substance—it always brings a context and a politics. Sentences from Reddit will be different from those in kids’ books. Images from mugshot databases have different histories than those from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there's still no industry-wide standard to note what kinds of data are held in training sets, how it was acquired, or potential ethical issues.

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence a person’s emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that's so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning if you drop culture and context, and the fact that you might change the way you look and feel hundreds of times a day.

That also becomes a feedback loop: Because we have emotion detection tools, people say we want to apply it in schools and courtrooms and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, the belief that you can detect character and personality from the face and the shape of the skull.

You contributed to recent growth in research into how AI can have undesirable effects. But that field is entangled with people and funding from the tech industry, which seeks to profit from AI. Google recently forced out two respected researchers on AI ethics, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research questioning AI?

I can’t speak to what happened inside Google, but what I’ve seen is incredibly troubling. It’s important that we have researchers inside technology companies seeing how these systems work and publishing about it.

 
We’ve seen research focused too narrowly on technical fixes and narrow mathematical approaches to bias, rather than a wider-lensed view of how these systems integrate with complex and high stakes social institutions like criminal justice, education, and health care. I would love to see research focus less on questions of ethics and more on questions of power. These systems are being used by powerful interests who already represent the most privileged in the world.
 

Is AI still useful?

Let's be clear: Statistical prediction is incredibly useful; so is an Excel spreadsheet. But it comes with its own logic, its own politics, its own ideologies that people are rarely made aware of.

And we’re relying on systems that don’t have the sort of safety rails you would expect for something so influential in everyday life. We have a regulation emergency: There are tools actually causing harm that are completely unregulated.

Do you see that changing soon?

We’re getting closer. We have Alondra Nelson in the White House Office of Science and Technology Policy, who has written about the fact that you cannot escape the politics of technology. And we’re starting to see a new coalition of activists and researchers who see that the interrelatedness of capitalism and computation is core to climate justice, and labor rights, and racial justice. I’m optimistic.


What is there to comment, except to confirm that what she says is very honest and accurate. There is no human-like intelligence there, just one complex piece of automation.

 

Also, some starry-eyed tech gurus are seriously misleading the masses that this is some "insane" achievement... maybe to them, but as far as any objectively real human-like consciousness goes, there is nothing of the sort.


My impressions are very mixed. In the natural sciences, where the ethical dimension is less (directly) present, AI (or what currently passes for AI, i.e. souped-up statistics) on the one hand delivers an incredible new quality (say, in terms of speeding up certain methods), while on the other it makes more and more researchers rely on various black boxes that use AI. Except these are no longer black boxes in the sense of programs someone else wrote in a cryptic way; rather, the very mechanism by which that additional "knowledge" is generated is opaque.

 

Concretely, I have a measurement that is devilishly complicated: it requires a lot of data, a lot of effort to process that data, a lot of data "massaging" and creative algorithms, and a lot of computing time. On the other hand, "AI" extracts practically the same thing (or more!) from a tenth of the data, in a thousandth of the time (not counting training). You can't help tipping your hat to that, but a deep unease remains: AI cannot actually find anything that is not already, in some form, stored in the training sample. On top of that, there is the unsettling methodological vagueness of how to determine the error of a result obtained this way. With standard methods, many struggle with the methodology of estimating the model error; here, it seems to me, that has in a way been swept under the rug.
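One conventional (if partial) response to the error-estimation worry is resampling: a trained model hands back a bare point estimate, and the error bar has to be reconstructed separately, e.g. by bootstrap. A minimal sketch with entirely invented data (a toy linear fit standing in for any trained model, not the poster's actual measurement):

```python
import numpy as np

# Invented "measurement": a linear relation with noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + rng.normal(0.0, 2.0, 50)

# A trained model typically hands back a bare point estimate...
slope = np.polyfit(x, y, deg=1)[0]

# ...while the error of that estimate must be reconstructed
# separately, here by bootstrap resampling of the input data.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(x), len(x))  # resample with replacement
    boot.append(np.polyfit(x[idx], y[idx], deg=1)[0])
err = float(np.std(boot))

print(f"slope = {slope:.2f} +/- {err:.2f}")
```

The bootstrap only quantifies scatter in the data actually seen; it says nothing about errors baked into the training sample itself, which is exactly the deeper unease above.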

 

I don't know... In the potentially more delicate fields the focus is on the ethical side, but what scares me is what sorts of wrong conclusions can be drawn by a researcher who does not understand the method (and often not even the foundations the method rests on).

42 minutes ago, chandra said:

My impressions are very mixed. In the natural sciences, where the ethical dimension is less (directly) present, AI (or what currently passes for AI, i.e. souped-up statistics) on the one hand delivers an incredible new quality (say, in terms of speeding up certain methods), while on the other it makes more and more researchers rely on various black boxes that use AI. Except these are no longer black boxes in the sense of programs someone else wrote in a cryptic way; rather, the very mechanism by which that additional "knowledge" is generated is opaque.

 

Concretely, I have a measurement that is devilishly complicated: it requires a lot of data, a lot of effort to process that data, a lot of data "massaging" and creative algorithms, and a lot of computing time. On the other hand, "AI" extracts practically the same thing (or more!) from a tenth of the data, in a thousandth of the time (not counting training). You can't help tipping your hat to that, but a deep unease remains: AI cannot actually find anything that is not already, in some form, stored in the training sample. On top of that, there is the unsettling methodological vagueness of how to determine the error of a result obtained this way. With standard methods, many struggle with the methodology of estimating the model error; here, it seems to me, that has in a way been swept under the rug.

 

I don't know... In the potentially more delicate fields the focus is on the ethical side, but what scares me is what sorts of wrong conclusions can be drawn by a researcher who does not understand the method (and often not even the foundations the method rests on).

 

Does what you are writing also relate to "black swan" events?

You know, the 2008 financial crisis and the use of "black box" financial models based on historical correlations.

 

 

 

 

2 hours ago, Budja said:

 

Does what you are writing also relate to "black swan" events?

You know, the 2008 financial crisis and the use of "black box" financial models based on historical correlations.

 

 

 

 

 

No, I don't know about that. Do you have a reference?

 

In my work, AI is trained on theoretical models. In principle, it will recognize something (say, some "correlation") in an experiment only if that something already exists in the theoretical model, even if you yourself never noticed it there. But if that something is wrong in the model for whatever reason (a wrong assumption, imprecise free parameters), the AI has no way of overcoming that deficiency. Quite the opposite: even if those flaws are (more or less easily) noticeable in the model, the AI's output has chewed them up and digested them together with all the good information, and spat them out in the form of a result that looks impressive and carries no trace of the problems the model suffered from.

 

One example is image super-resolution, where AI is used to guess the content inside a "pixel".

3 hours ago, chandra said:

 

No, I don't know about that. Do you have a reference?

 

In my work, AI is trained on theoretical models. In principle, it will recognize something (say, some "correlation") in an experiment only if that something already exists in the theoretical model, even if you yourself never noticed it there. But if that something is wrong in the model for whatever reason (a wrong assumption, imprecise free parameters), the AI has no way of overcoming that deficiency. Quite the opposite: even if those flaws are (more or less easily) noticeable in the model, the AI's output has chewed them up and digested them together with all the good information, and spat them out in the form of a result that looks impressive and carries no trace of the problems the model suffered from.

 

One example is image super-resolution, where AI is used to guess the content inside a "pixel".

 

Risk assessments and portfolio diversification were carried out on the basis of historical probabilities and correlations between different individual assets. Then a "black swan" event happened (a positive correlation of multiple assets in terms of risk, which had until then been underestimated) and the models went down the drain.
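That mechanism can be sketched in a few lines; the volatilities, weights, and correlation values below are invented purely for illustration, not taken from any real market data.

```python
import numpy as np

# Two hypothetical assets, each with 20% annual volatility,
# held in a 50/50 portfolio.
vol = np.array([0.20, 0.20])
weights = np.array([0.5, 0.5])

def portfolio_vol(rho):
    """Portfolio volatility for a given asset-asset correlation rho."""
    cov = np.array([[vol[0]**2,             rho * vol[0] * vol[1]],
                    [rho * vol[0] * vol[1], vol[1]**2]])
    return float(np.sqrt(weights @ cov @ weights))

# Correlation estimated from calm historical data...
calm = portfolio_vol(0.1)
# ...versus a crisis, where correlations jump toward 1.
crisis = portfolio_vol(0.95)

print(f"estimated risk: {calm:.1%}, crisis risk: {crisis:.1%}")
```

With the assets estimated as nearly uncorrelated, the portfolio looks much safer than either asset alone; once the correlation spikes, the diversification benefit evaporates while a model calibrated on history keeps reporting the stale, lower number.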

 

 

1 hour ago, Budja said:

 

Risk assessments and portfolio diversification were carried out on the basis of historical probabilities and correlations between different individual assets. Then a "black swan" event happened (a positive correlation of multiple assets in terms of risk, which had until then been underestimated) and the models went down the drain.

 

 

 

That is what I see as the biggest problem. You feed some knowledge into the models. Then you mix that knowledge in the deep-learning blender (so that you can no longer recognize it yourself) and expect to get some new knowledge as the result. I am quite conservative there. For something to count as "knowledge", there has to be a complete understanding of the process, the errors, the limitations of the initial assumptions, and so on. Plenty of researchers have had all sorts of black-magic moments in their papers even before this. Now it is somehow being legitimized.

 

To me, this is very well put -

Quote

It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.

AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence and nothing could be further from the truth.

 

5 hours ago, chandra said:

 

That is what I see as the biggest problem. You feed some knowledge into the models. Then you mix that knowledge in the deep-learning blender (so that you can no longer recognize it yourself) and expect to get some new knowledge as the result. I am quite conservative there. For something to count as "knowledge", there has to be a complete understanding of the process, the errors, the limitations of the initial assumptions, and so on. Plenty of researchers have had all sorts of black-magic moments in their papers even before this. Now it is somehow being legitimized.

 

To me, this is very well put -

 

 

Now, I understand these AI models rather poorly.

I assume some algorithms recognize errors and see certain limitations, and decide on that basis.

But the final "judgment" is always missing. That experience and gut feeling that something is off about the end result, that it is not believable, after which you reverse-engineer it to see where it breaks down logically and technically. I don't know whether machines do that.

 

Professionals at banks who make decisions, especially at quant funds, don't. Better to blame the program and some financial engineer over there than to bear the consequences yourself.

6 hours ago, Budja said:

 

Now, I understand these AI models rather poorly.

I assume some algorithms recognize errors and see certain limitations, and decide on that basis.

But the final "judgment" is always missing. That experience and gut feeling that something is off about the end result, that it is not believable, after which you reverse-engineer it to see where it breaks down logically and technically. I don't know whether machines do that.

 

Professionals at banks who make decisions, especially at quant funds, don't. Better to blame the program and some financial engineer over there than to bear the consequences yourself.

 

Some "errors", yes, but some they simply cannot detect, because they have no way of knowing that they are errors. That may be less critical in economics or banking waters, but in my field it is essential.

 

A trivial example: you build a model of some fluid that has only two phases, liquid and gaseous. And a climate model that somewhere deep inside contains that fluid model. And you train an AI on that model. Nothing prevents the AI from predicting rain at minus fifty degrees Celsius, or who knows what consequences of that rain; it simply does not know that the model it was trained on is inapplicable to that parameter range (even though it is, say, perfectly accurate for some tropical climate), and it has no way of learning that. In other words, it processes the knowledge it is given and extracts causalities from that knowledge (which can be extremely valuable!), but it has no ability to add fundamental knowledge.
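That failure mode can be sketched with a toy regression; the rainfall-temperature relation and every number below are invented for illustration (a stand-in for any trained statistical model, not a real climate code).

```python
import numpy as np

# Hypothetical training data: rainfall vs. temperature, sampled
# only in a "tropical" range of 15-40 C where the underlying
# model is valid.
rng = np.random.default_rng(0)
temps = rng.uniform(15, 40, 200)
rain = 2.0 + 0.5 * temps + rng.normal(0.0, 1.0, 200)  # toy relation

# Fit a simple model on the valid range.
coeffs = np.polyfit(temps, rain, deg=1)

# The model happily returns a rainfall prediction at -50 C,
# far outside the range it was trained on, with no warning
# that the answer is meaningless (here it is even negative).
prediction = np.polyval(coeffs, -50.0)
print(f"predicted rainfall at -50 C: {prediction:.1f}")
```

Nothing in the fitted object records that 15-40 C was the domain of validity; that constraint lived in the heads of whoever built the training set, and it is exactly the "fundamental knowledge" the model cannot add.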

 

Writing this during a break, sorry if it's muddled.
