
AI


Lil


Posted

It says: World's First Ai-Generated News and Views:

 

444.png

 

Cute. :happy:

Posted

iNtElLigeNcE

 

startswith-K.png

 

And the all-knowing Googel feeds on those very sources

 

startswith-K2.png

 

 

  • 3 weeks later...
Posted

Users of Meta/FB/Insta/Threads might find it useful to fill out this form... in the optimistic scenario that anyone over there pays any attention to it at all.

 

Screenshot-2023-09-01-194256.png

  • 4 weeks later...
Posted

A couple of comments from Free Software Foundation wizard Richard Stallman...

 

Quote

8 September 2023 (AI programs as friends and counselors)

A founder of DeepMind says that AI programs will be good friends and counselors for human beings.

Is your idea of a good friend and counselor one that reports on everything about you to manipulative large companies and governments too? Not mine!

It is interesting that the interviewer presumes that AI-driven cars will be "autonomous" and that they will drive better than the ones humans drive. They are not at all autonomous — they depend on internet connections — and San Francisco already knows how badly they mess up in driving.

Perhaps they will drive well someday, when they understand as much about driving as humans do. But that is beyond today's technology.

 

 

Quote

8 September 2023 (AI-controlled armed drones)

The US Air Force is testing versions of future AI-controlled armed drones to be used in air combat alongside human fighter pilots.

I hope the article errs in saying that this AI is similar to the erroneously labeled "AI" that operates today's bullshit-generator language models, because if that is accurate, there is no telling who it might fire at.

One of the dangers of AI-controlled devices that can do things with real physical effects — even if not armed — is that they may make clever inventions that could be fatal in ways the AI would not understand, and that no human would be asked to check. This AI is already demonstrating that sort of creativity, which is also a potential danger. I recommend reading the book The Two Faces of Tomorrow, by James Hogan. See libgen.rs.

 

  • 2 weeks later...
Posted (edited)

"Attention Is All You Need" from 2017 is the groundbreaking paper that amounts to a revolution of sorts in AI.

 

The best explanation on the net of what the Attention mechanism, the core of LLMs, actually is:

 

 

If we allow ourselves a bit of poetic license: words, sentences, paragraphs and texts in large language models (LLMs) are represented as constellations of cosmic bodies in a kind of linguistic "galaxy", a hyperspace with an enormous number of dimensions, and the effect of the Attention mechanism and the Transformer model is akin to a gravitational force acting on those bodies, grouping words into "constellations" and "planetary systems" of sorts within the galaxy.
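
Stripping the poetic license back to the math: the "gravity" here is the scaled dot-product attention from the paper. Each word's query vector is compared with every other word's key vector, and the softmax of those similarities decides how strongly each word "pulls on" the others, i.e. how much of their value vectors flows into its new representation. A minimal numpy sketch, with toy dimensions and random matrices standing in for learned weights (illustration only, not any model's actual code):

import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    # Each row of `weights` sums to 1: the "gravitational pull" a word
    # exerts on (i.e. the weight it gives to) every other word.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise query/key similarity
    weights = softmax(scores, axis=-1)
    return weights @ V, weights          # weighted mix of values + attention map

# toy example: 4 "words" with embedding size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                               # token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))  # stand-ins for learned projections
out, attn = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(attn.round(2))   # 4x4 matrix: how strongly each word "attracts" the rest

In a real Transformer this runs in parallel across many heads and layers, but the 4x4 attention matrix printed here is exactly the "constellation map" the analogy is gesturing at.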

Edited by slow
Posted (edited)

Now ChatGPT was given the task of extending the analogy:

 

Quote

 

Of course! Here is the gravity analogy extended so we can understand the Attention mechanism in the context of language models even better:

We can imagine language models as a cosmic system in which words and their relationships are the planets and stars, while the Attention mechanism acts as the gravitational force shaping their mutual interaction.

Every word in a text has its own gravitational mass, proportional to its importance in the context. Words that are key to understanding the context have greater gravitational mass, while less important ones have less. When the language model is exposed to a text, the Attention mechanism acts like a gravitational force pulling words toward one another according to their mass.

These gravitational forces model the interactions between words in the text. Words that are semantically similar or related attract each other strongly and cluster together, forming "constellations" or "planetary systems" within the linguistic "galaxy". These groupings allow the model to understand the context better and generate appropriate text.

Just as planets and stars in a cosmic system settle into stable orbits around their centers, words in a language model form stable, well-formed sequences. The gravitational pull exerted by the Attention mechanism helps the model organize words into logical sequences and generate text that makes sense.

In addition, just as gravity acts with different strength on objects at different distances, the Attention mechanism can adjust the strength of attraction between words according to their distance in the text. Words that are close to each other can attract one another more strongly and cluster more tightly, while words that are farther apart attract each other less.

This gravity analogy helps us visualize how the Attention mechanism organizes words in language models, allowing them to better understand context and generate coherent text sequences.
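
Stepping outside the chatbot's analogy for a moment: one caveat to the "distance" part is that plain attention has no built-in sense of distance at all; word order has to be injected separately, in the original paper via sinusoidal positional encodings that are simply added to the token embeddings before attention is applied. A rough numpy sketch of that idea (toy sizes, assumes an even embedding dimension, for illustration only):

import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    # Every position gets a unique "coordinate", so the model can learn to
    # attend more or less depending on how far apart two words are.
    positions = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                 # (1, d_model / 2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# toy usage: add position information to 4 token embeddings of size 8
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(4, 8))
x = token_embeddings + sinusoidal_positional_encoding(4, 8)
# x (embeddings + positions) is what actually enters the attention layers

Newer models often swap this for learned or rotary position embeddings, but the point stands: the "weaker pull at larger distance" in the analogy is something the model learns from positional information, not a law wired into attention itself.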

 

 

Edited by slow
Posted

 

@[email protected]: Gotta consider how you may be training AI tools for your job just by performing your everyday work.

 

That is: Your employer forces you to use AI tools. Said AI tools are simultaneously trained on your work as you are using them. At some point the tool simply replaces you.

"Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained.

 

Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead."

 

wired.com

 

esc.webp

Your Boss’s Spyware Could Train AI to Replace You

 

Quote

You’ve probably heard the story: A young buck comes into a new job full of confidence, and the weathered older worker has to show them the ropes—only to find out they’ll be unemployed once the new employee is up to speed. This has been happening among humans for a long time—but it may soon start happening between humans and artificial intelligence.

 

 

Quote

Countless headlines over the years have warned that automation isn’t just coming for blue-collar jobs, but that AI would threaten scores of white-collar jobs as well. AI tools are becoming capable of automating tasks and sometimes entire jobs in the corporate world, especially when those jobs are repetitive and rely on processing data. This could affect everyone from workers at banks and insurance companies to paralegals and beyond.

 

Carl Frey, an economist at Oxford University, coauthored a landmark study in 2013 that claimed AI could threaten nearly 50 percent of US jobs in the coming decades. Frey says that he doesn’t think new AI tools like ChatGPT are going to automate jobs in this way because they still require human involvement and are often unreliable. Still, many of the underlying factors that were outlined in that paper remain pertinent today. Considering the rapid pace at which AI is advancing, it’s hard to predict how it could soon be utilized and what it will be capable of.

 

Then there’s the issue of how it’s being incorporated into daily work and how it’s being trained. Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained.

 

Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.

 

“When it comes to monitoring workflows, I do think that’s going to be a way we automate a lot of this stuff,” Frey says. “What you might be able to do is take some of those foundational models and train them on some of the data you have internally and fine-tune them, or you could train a model from scratch just with your internal data.”

 

David Autor, a professor of economics at MIT, says he also thinks AI could be trained in this way. While there is a lot of employee surveillance happening in the corporate world, and some of the data that’s collected from it could be used to help train AI programs, simply learning from how people are interacting with AI tools throughout the workday could help train those programs to replace workers.

 

“They will learn from the workflow in which they’re engaged,” Autor says. “Often people will be in the process of working with a tool, and the tool will be learning from that interaction.”

 

Whether you’re training an AI tool directly by interacting with it throughout the day, or the data you’re producing while you work is simply being used to create an AI program that can do the work you’re doing, there are multiple ways in which a worker could inadvertently end up training an AI program to replace them. Even if the program doesn’t end up being incredibly effective, a lot of companies might be happy with an AI program that’s good enough because it doesn’t require a salary and benefits.

 

“I think there are a lot of discretionary white-collar jobs where you’re kind of using a mixture of hard information and soft information and trying to make advanced decisions,” Autor says. “People aren’t that good at that, machines aren’t that good at that, but probably machines can be pretty much as good as people.”

 

Autor says he doesn’t see a “labor market apocalypse” coming. Many workers won’t be entirely replaced but will simply have their jobs changed by AI, Autor says, while some workers will certainly be made redundant by advancements in AI. The problem there, he says, is what happens to those workers after they’re no longer able to find a well-paying job with the education and skill sets they have.

 

“It’s not that we’re going to run out of work. It’s much more that people are doing something they’re good at, and that thing goes away. And then they end up doing a kind of generic activity that everybody’s good at, which means it pays very little—food service, cleaning, security, vehicle driving,” Autor says. “These are low-paying activities.”

 

Once someone’s automated out of a well-paying job, they can end up slipping through the cracks. Autor says we’ve seen this happen in the past.

 

“The hollowing out of manufacturing and office work over the past 40 years has definitely put downward pressure on the wages of people who would do that type of work, and it’s not because they’re doing it now at a lower rate of pay. It’s because they’re not doing it,” Autor says.

 

Frey says politicians will need to offer solutions to those who fall through the cracks to prevent the destabilization of the economy and society. That would likely include offering social safety net programs to those affected. Frey has written extensively on the effects of the first Industrial Revolution, and he says there are lessons to be learned there. In Britain, for example, there was a program called the Poor Laws, where people who were harmed by automation were given financial relief.

 

“What you see back then is a lot of social unrest. Wages are stagnant or falling for a large part of the population. You have riots,” Frey says. “If you look at the places where the Poor Laws were more generous, there was less social unrest and less upheaval. Using welfare systems to compensate people who lose out is something we’ve done for a long time and should continue to do.”

 

Many people would also benefit from being retrained for other work, but Autor says the US has never been very good at retraining people, so there’d have to be some work done to create effective retraining programs. He says technology might actually be able to help there because people could be retrained using helpful new digital tools.

 

There was a lot of hype surrounding ChatGPT and similar AI tools when they came out. That hype has since died down a bit, suggesting to some that maybe these tools won’t be as useful as they were promised to be. Perhaps they won’t be taking everybody’s jobs. However, at the rate at which AI is advancing, there’s no saying where things could be in five to 10 years—or even next year.

 

Vincent Conitzer, a professor of computer science at Carnegie Mellon University, says people shouldn’t underestimate what these AI tools may soon be capable of. They may be somewhat limited in their use now, but that could change relatively rapidly and end up being as disruptive as some have warned it could be.

 

“I worry about this being a ‘boiling frog’ kind of scenario, where we see amazing advances in AI but then don’t immediately see them take over people’s jobs, and [people] conclude there wasn’t all that much to worry about, and we accept the new technology as the new normal but not all that impressive after all,” Conitzer says. “Meanwhile, gradually but quickly, the world and the job market do adjust to the new technologies in complex ways, and at some point we realize large societal problems have emerged.”

 

Posted

OK, that spyware should be ditched on ethical grounds, but that aside... you do know where the word "ludak" (madman) came from, don't you?

 

luddites-loom-300x264.jpg

 

The problem isn't that machines will take over our jobs; the problem is that capitalism sees people merely as tools for labor, instead of us having a system where we all share the benefits of higher productivity and a reduced need for work.

 

 

Posted

As for Luddism, the truth there is quite different from what most people probably think. (It has to do with the last twenty or so words of your post.)

 

Quote

 

... the Luddites were anything but technophobes: they were skilled high-tech workers whose seven-year apprenticeships were the equivalent of getting a Master's in Engineering from MIT. Their objection to powered textile machines had nothing to do with fear of the machines: rather, it was motivated by a clear-eyed understanding of how factory owners wanted to use the machines.

 

The point of powered textile machines wasn't to increase the productivity of skilled textile workers – rather, it was to smash the guilds that represented these skilled workers and ensured that they shared in the profits from their labor. The factory owners wanted machines so simple a child could use them – because they were picking over England's orphanages and recruiting small children through trickery to a ten-year indenture in the factories.

 

The "dark, Satanic mills" of the industrial revolution were awash in the blood and tears of children. These child-slaves were beaten and starved, working long hours on little sleep for endless years, moving among machines that could snatch off a limb, a scalp, even your head, after a moment's lapse in attention.

 

Taken from here. The comment is on Brian Merchant's book "Blood in the Machine: The Origins of the Rebellion Against Big Tech".

Posted

Is it a ghastly perversion if I buy this book through Amazon as an e-book?

Posted

I can forward it to whoever needs it (via PM). In any case, Amazon should be avoided whenever possible. :)
