
ChatGPT


Vapad

Recommended Posts

Posted

btw, Chomsky once said (back when the first language models based on the frequency of words and word groups appeared, the ones I also wrote about) something along the lines of "probability of a sentence is an entirely useless concept, under any known interpretation of this term", so I wouldn't get too excited over his statements
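
(For anyone wondering what "probability of a sentence" even refers to in those frequency-based models: below is a minimal, purely illustrative Python sketch of a bigram model. The toy corpus, the <s>/</s> markers and the add-one smoothing are my own assumptions for the example, not any particular system.)

# Minimal sketch of "the probability of a sentence" under a bigram
# (word-pair frequency) model; corpus and smoothing are illustrative only.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

tokens = [w for sent in corpus for w in f"<s> {sent} </s>".split()]
unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
vocab_size = len(unigram_counts)

def sentence_probability(sentence):
    """P(sentence) as a product of add-one-smoothed bigram probabilities."""
    words = f"<s> {sentence} </s>".split()
    prob = 1.0
    for prev, curr in zip(words, words[1:]):
        prob *= (bigram_counts[(prev, curr)] + 1) / (unigram_counts[prev] + vocab_size)
    return prob

print(sentence_probability("the cat sat on the mat"))   # relatively high
print(sentence_probability("mat the on sat cat the"))   # much lower

Run as-is, the first sentence gets a noticeably higher score than its scrambled version, which is all "probability of a sentence" means here.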

 

 

which in no way means that language models understand the world, create something new out of nothing, or have their own opinions and feelings

 

 

(so, nothing extreme)

Posted

 

1 hour ago, Spooky said:

Many people disagree with Chomsky and similar theories that language is evolutionarily written into our genetic code, but so it's not me criticizing him again, here's an article
https://www.scientificamerican.com/article/evidence-rebuts-chomsky-s-theory-of-language-learning/

 

The article refutes Chomsky's idea of a universal grammar of human languages; I'm not competent to judge it, but it's striking that it has nothing to do with evolution. I don't know whether you copied the wrong link or you're just googling "Noam Chomsky wrong" and passing along the first result that pops up.

 

1 hour ago, Spooky said:

Actually, hardly anything from the last 50,000-100,000 years of development was developed through evolution. What evolution gave us are motor functions, sensory processing, the brain's general capabilities, but writing, the scientific method, engineering and building on what was already built in all of that and in science have nothing to do with evolution. Not to mention how much more complex our language is than the language of hunter-gatherers from 20,000 years ago. Anyone who has had the chance to talk with uneducated people, sometimes smart but without education, literacy, or the opportunity to meet many different people with a broader vocabulary, knows that the language of such people is often far more limited. Now imagine hunter-gatherers and their language.

 

The unbearable conceit of modern man, obsessively conditioned to imagine himself sitting at the top of some pyramid :rolleyes:

 

Our language is probably somewhat more complex than the hunter-gatherer language of a hundred thousand years ago; that can neither be refuted nor proven. But the language of homo sapiens at any stage of development is incomparably more complex than the "language" used by orangutans, rhinos, moles, or platypuses! That is easily demonstrated and practically indisputable.

 

Those are fundamental differences in level of complexity. That is the evolutionary development of language we're talking about here, a plain indicator of the evolutionary development of the human brain, a manifestation of the slow but inexorable organic growth in complexity of all living beings.

 

1 hour ago, Spooky said:

In many things we have surpassed evolution with the scientific/engineering approach: we fly higher than birds, faster than birds, we move faster than the fastest animals, and so on. Computers long ago surpassed humans in the speed of arithmetic operations, which is why we use them so much in so many fields. Why is it impossible to imagine that we could surpass evolution with a similar approach in depth of intellect too...

 

Man's ability to design or build an airplane hasn't "surpassed evolution"; it is a direct consequence of evolution.

 

Our intellect is a product of evolution; it lets us overcome various biological limitations we otherwise physically couldn't. (It lets us do plenty of other things too, but that's beside the point.)

 

But the intellect's ability to comprehend itself is extremely questionable. I like the basic ideas of transhumanism; I'd gladly live to see a technological singularity after which everything would be different, our consciousness effectively unlimited, unconstrained by biological limits and the entropic finiteness of being...

 

Whether we'll ever get there, nobody can reliably tell. (I, unfortunately, doubt it.) But it's perfectly legitimate to say that clever chatbots that mimic us and tell us what we want to hear are not a step in that direction. That they could very easily be leading us into an evolutionary dead end.

 

Posted
14 minutes ago, Weenie Pooh said:

The article refutes Chomsky's idea of a universal grammar of human languages; I'm not competent to judge it, but it's striking that it has nothing to do with evolution. I don't know whether you copied the wrong link or you're just googling "Noam Chomsky wrong" and passing along the first result that pops up.

It very much does have to do with it. The idea behind universal grammar, the one that prompted Chomsky to work on universal grammar in the first place, is that we are born with a part of the brain already built in for language, i.e. for grammar, which we arrived at through evolution. That is, children learn language quickly because they evolved to learn it quickly, just as a fawn walks minutes after birth because it evolved to walk and doesn't need to learn it; and from that follows some general, universal grammar that sits in the hardware rather than in the software we learn after birth.

 

Universal grammar was first shaped around European languages (which are, incidentally, very closely related), but as more languages were considered, the theses of universal grammar kept falling away until the whole thing collapsed, because whatever rule they came up with as universal, a language turned out to exist that doesn't follow it. Then came attempts with certain features that our brain can supposedly enable or disable while learning a language, but even categorizing those proved impossible.

 

I think many people look at languages too narrowly. I found it fascinating to read about the grammars of sign languages, how some of them developed completely naturally from the ground up among deaf people (https://en.wikipedia.org/wiki/Deaf-community_sign_language) and how they evolve over time much like vocal languages. And I find it hard to imagine a structure in the brain general enough to have enabled all verbal and all sign languages and possibly other forms of communication, yet specific enough for us to say: this is an evolutionarily developed part of the brain for language.

 

26 minutes ago, Weenie Pooh said:

Whether we'll ever get there, nobody can reliably tell. (I, unfortunately, doubt it.) But it's perfectly legitimate to say that clever chatbots that mimic us and tell us what we want to hear are not a step in that direction. That they could very easily be leading us into an evolutionary dead end.

Nobody can, reliably. And it's legitimate to state any opinion, and also to disagree with it.

And I don't see why a dead end; all the other directions are being researched in parallel (except genetic augmentation and cloning, which have largely been declared unethical). Yes, maybe while this is popular a bit more money flows this way and some other research slows down, but that has happened throughout the history of research, and it doesn't matter all that much whether we reach something 5-10 years earlier or later... We lost decades searching for an algorithmic approach in computer vision, language translation, natural language processing and so on, and in the end we still got out of that dead end with the discovery of machine learning...

Posted
12 minutes ago, Spooky said:

We lost decades searching for an algorithmic approach in computer vision, language translation, natural language processing and so on, and in the end we still got out of that dead end with the discovery of machine learning...

I don't know what you mean here, but machine learning has been used for language and translation for a good 30 years now

 

and the big leap did come with neural networks (which are also one of the possible approaches within machine learning), starting around 2015, only they weren't "discovered" then either; they had long been used for some "simpler" tasks like speech recognition, it was just hard to apply them to translation or language models because of a level of complexity that computers couldn't yet handle

 

so in 2015 they didn't start being used because they were suddenly discovered as an option, but because computers became powerful enough

 

which is also the main reason for the success of the GPTs and company: computers powerful enough for neural networks as such, and then ever more powerful ones for complex, large neural networks, plus enooormous amounts of training data
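
(To put rough numbers on the "computers became powerful enough" point, here's a back-of-the-envelope sketch. The ~6 × parameters × tokens rule of thumb for training FLOPs and all the concrete figures below are illustrative assumptions on my part, not anyone's published training setup.)

# Back-of-the-envelope arithmetic for why compute scale matters; every number
# here (parameter count, token count, hardware throughput) is an assumption
# chosen only to illustrate the orders of magnitude involved.
params = 175e9                        # parameters of a large GPT-style model (assumed)
tokens = 300e9                        # training tokens (assumed)
train_flops = 6 * params * tokens     # common ~6*N*D rule of thumb for training FLOPs

cluster_flops_per_s = 1000 * 100e12   # 1000 accelerators at ~100 TFLOP/s each (assumed)
single_cpu_flops_per_s = 100e9        # one older CPU, ~100 GFLOP/s (assumed)

days_on_cluster = train_flops / cluster_flops_per_s / 86400
years_on_one_cpu = train_flops / single_cpu_flops_per_s / (86400 * 365)

print(f"total training compute : ~{train_flops:.1e} FLOPs")
print(f"on the assumed cluster : ~{days_on_cluster:.0f} days")
print(f"on a single CPU        : ~{years_on_one_cpu:.0f} years")

Whatever exact figures you plug in, the gap between "weeks on a modern cluster" and "geological timescales on older hardware" is the point being made above.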

 

Posted
44 minutes ago, Spooky said:

It very much does have to do with it. The idea behind universal grammar, the one that prompted Chomsky to work on universal grammar in the first place, is that we are born with a part of the brain already built in for language, i.e. for grammar, which we arrived at through evolution.

 

And I find it hard to imagine a structure in the brain general enough to have enabled all verbal and all sign languages and possibly other forms of communication, yet specific enough for us to say: this is an evolutionarily developed part of the brain for language.

 

But that is evidently the case :D There is no alternative to the evolutionary predisposition of the human brain for abstract thought and language. What else could it be, an ape one day decided on its own to become an engineer? Ancient Aliens™ genetically modified it into talking?

 

The universality of human grammar is a theoretical concept far removed from the basic things we're discussing here. And looking for a specific part of the brain in charge of this or that is a concept long since abandoned in neuroscience. No aspect of human consciousness is pinned to any one part of the brain mass; it is a dynamic set of electrochemical processes generated there. We don't know those patterns well enough, we can't control them, so instead we turn to mechanical approximations and place our hopes in those.

 

44 minutes ago, Spooky said:

And I don't see why a dead end

 

in the end we still got out of that dead end with the discovery of machine learning

 

As Amelija says above, the ML approach is not some radical turn away from algorithms; quite the opposite.

 

It's a matter of technical refinement that made this popularization of LLM "black boxes" possible, and to me they smell like a dead end precisely because they operate efficiently on symbols while comfortably ignoring the meaning behind them, i.e. they have no capacity to grasp any meaning at all.

 

Mimicry of consciousness, prediction of lexis = harbingers of the semantic apocalypse.

 

YMMV

Posted
1 hour ago, Weenie Pooh said:

But that is evidently the case :D There is no alternative to the evolutionary predisposition of the human brain for abstract thought and language. What else could it be, an ape one day decided on its own to become an engineer? Ancient Aliens™ genetically modified it into talking?

Well, an evolutionarily developed brain built for abstract thinking and for quickly learning many things, language included, is not in question. What is in question is this concept, which is tied specifically to language: https://en.wikipedia.org/wiki/Language_acquisition_device.
So you disagree with Chomsky on this point too?
Then his claim about possible and impossible languages makes no sense either. Impossible languages, by the definition of his colleague Moro, are supposed to be languages in which communication would theoretically be possible, but which children could not learn because of a mismatch with our brain. The explanations then split into the obvious ones - say, a language with rules so complex that a human could not learn or grasp them, or a language that makes no distinction between sounds (signs, in sign language) - I wonder what kind of language that would even be, maybe some binary one, sound present = 1, no sound = 0, but that's too slow for communication - and universal grammar, an attempt that has failed.

Posted (edited)
On 25. 4. 2023. at 18:42, Weenie Pooh said:

Noam Chomsky: The False Promise of ChatGPT
March 8, 2023
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a director of artificial intelligence at a science and technology company.

 

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.


OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.


That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.


It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.


For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.


Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.


Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.


The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)


But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.


For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.


Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”


The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.


True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.


In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:

 


Jeffrey Watumull: Would it be moral to terraform Mars?
ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
Jeffrey Watumull: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?
ChatGPT: As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
Jeffrey Watumull: Why can an AI not have a personal perspective?
ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.
Jeffrey Watumull: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?
ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
Jeffrey Watumull: Is your moral indifference immoral?
ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.

 

Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.


In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

 


TL;DR - but the example of "only natural intelligence can grasp this" caught my eye - so let me check that right away, since the author didn't think to do it himself.

and here's what Chat Bing (GPT-4) says:

[screenshot of Bing Chat's answer]

 

The second question was a little test of its grasp of emotions - that particular adjective was not chosen at random.

oh and, I just saw another "AI can't do this" example from the article
and Chat BING says to that:
[screenshot of Bing Chat's answer]


In March 2023 they write a whole article based on inaccurate examples as evidence!? - while Chat BING is literally at their fingertips to shoot the story down right from the start.

EDIT: now I'm reading on past this post of Weenie's that triggered me, and I see what @Spooky already wrote :thumbsup:
 

Edited by Lucia
Posted (edited)

actually I didn't come here to talk to John but to share 2 more interviews with the OpenAI crew that I find brilliant:

Ilya Sutskever gives a chronological overview of the incredible development of AI, ML and GPT over the last 20 years
and explains why neural networks are no longer "just statistics/prediction" but also some level of reasoning
 



Greg Brockman in more detail on ethics and principles
Doomsday prophets (who accompany every tech revolution, and we are just entering one) could listen to this before they switch on their auto-pilot:
 

 

Edited by Lucia
Posted
On 28. 4. 2023. at 10:04, Spooky said:

This is a very vague criterion. What does "something new" even mean? If it means a new, never-before-uttered sentence / pair of sentences or a stanza of a poem, then it has passed that. You simply pick a longer sentence/passage with more unusual words (as with any human text on an arbitrary topic) and voilà - you google it in quotes and get 0 hits. In Serbian (where the training data is considerably weaker), on page 5 there's also the example of it coining the word "najmostovitiji" to describe Hamburg as the city with the most bridges: it simply applied the existing logic for deriving an adjective from a noun and added the superlative prefix naj-, and it would all make perfect grammatical and morphological sense except that this particular noun doesn't turn into such an adjective, while a similar word, "brdo" (the city with the most hills: brdo - brdovit - najbrdovitiji), does, so it looked like a word a child learning the language would construct.

Another line was crossed the day university professors of English gave top marks to an essay on a specific topic (with the plagiarism software raising no objections either) that was written by ChatGPT.


If "something new" means something unique in the history of humanity, a new scientific theory or a new invention that moves humanity forward, then no, it hasn't. But almost all humans fail that test too, and nobody claims that current AI is at human level yet, so it's too early to expect that kind of success for now.

 

 

Can a GPT-like model achieve this second definition in the future? I don't know. Ten years ago 99%+ of the relevant experts would have laughed and said no. Today, when even this subhuman-level GPT has surprised them in so many ways, nobody is laughing and that percentage is well below 99%.


I agree with all of that, I'd just add something to the bolded part, and in general to all these discussions about AGI:

Ta "ljudska inteligencija" sa kojom se AGI stalno poredi je uvek neka idealizovana, darezljivo podignuta za citavo covecanstvo na nivo vrlo inteligentnih ljudi. Ispada da vecina ljudi i nema tu inteligenciju. Vec sad se pokazuje da je AGI (GPT-4) pametnija (po RL standardima kada za neku osobu kazemo da je pametna) od mnogih/vecine ljudi: prolazi bolje na testovima/ispitima, elokventnija je, pismenija, kreativnija, bolje razume/objasnjava itd. Utisak mi je da to podizanje lestvice samo za AGI potice iz nekog iracionalnog straha ugrozenosti, gde se ona dozivljava kao neprijatelj. Zar ne bi bilo pametnije da se bavimo time kako sve vec sad ovaj do skoro neverovatni nivo AI mozemo da iskoristimo. Mene zaista uzbudjuje sto se sad otvaraju mogucnosti na sve strane, i sta ce tek doci sa sledecim verzijama.

Posted
On 28. 4. 2023. at 12:15, Shan Jan said:

Well fine, ChatGPT has no sensors for sight, touch, smell... so how could it get input from those. For now it gets input where it can. What matters more to me is the logic behind it, how much it can build something new and meaningful from the inputs it does have.


ChatGPT is at best version 3.5.
But GPT-4 already works with images, and does it very well. Text-image integration is coming soon and that will be an additional push. Check out that talk above - somewhere near the end Ilya S. talks about exactly that.

  • 4 weeks later...
Posted

Andrej Karpathy (OpenAI) the other day at Microsoft BUILD 2023 - just a half-hour digest on 1) training NNs, 2) tips & tricks for talking to ChatGPT
(This MS BUILD was all about the AI Copilot transformation of MS products and services - it's up on YT)
 


Karpathy also has a whole series of excellent YT videos, things like "Let's build GPT from scratch" and the "Neural Networks: Zero to Hero" series... the man is fascinating.

I just happened to see this, also from a few days ago:
Sam Altman (OpenAI CEO) before the US Senate. They are an extremely interesting company in general: from the unusual financial structure (a mix with an NPO and capped profit) and the recent partnership with Microsoft, to this initiative of theirs to get the AI field regulated (he talks about it constantly in interviews) - they really are Open :thumbsup::

 

Posted

https://www.huffpost.com/entry/black-mirror-creator-artificial-intelligence-write-episode_n_647fac60e4b027d92f88fb7b

 


“I’ve toyed around with ChatGPT a bit,” Brooker said. “The first thing I did was type ‘generate Black Mirror episode’ and it comes up with something that, at first glance, reads plausibly, but on second glance, is shit.”

ChatGPT is an AI chatbot tool that uses natural language processing to create human-like conversational dialogue. It also assists users with tasks like writing emails and essays.

 

Brooker explained that all the chatbot basically did was sift through the show’s episode synopses and “sort of mush them together,” resulting in a whopping disappointment.

 

“Then if you dig a bit more deeply, you go, ‘Oh, there’s not actually any real original thought here,’” he added.

 

Posted

I've gotten endlessly hooked on ChatGPT at work over the past few months; and in the past couple of weeks it's been at "Google-level" usage all day long (several simultaneously active "tabs" for different tasks).

 

At the very start, Dave Birss's quick course on LinkedIn Learning, "How to Research and Write Using Generative AI Tools", served me extremely well, so there's a recommendation from first-hand experience.

 

Dave is generally a pretty likeable and creative guy; maybe someone has already heard or seen him in Belgrade - he was at the Webiz/RNIDS events, as well as in one episode of Pojacalo.

 

 
