
ChatGPT


Vapad


2 hours ago, Weenie Pooh said:

Khm, pardon, htedoh reći, ostavi da završi rečenicu koju si ti to je prilično jasno da je to već uradilo pola timova sa kojima se govori o tome da se ne svađaš sa mamom i te kako je to stav o tome šta će biti u stanju da se ne istroše prerano I am not sure what you mean by the fact that you are not the only one who has been in the same expressions in successive stages of the process of getting a new job and a job that you can do it for you and your business.

It wanders off into English completely unprovoked. Bad training data, what can you do.

I'd say you're trivializing the algorithm a bit, since its "understanding" of language seems to be on a rather higher level than a glorified autocomplete. Which is again a reduction, just like that Altman move of reducing a human being to a stochastic parrot. My layman's lack of understanding only lets me wonder whether a model like this can create something essentially new out of interpolation, if such a thing can even be defined. Or is it all just a decompressed query.


In the end, man has to do the defining. In time. Otherwise, as that bigwig put it, the first time we disagree, over mutual benefit, with something much smarter than us - we're gone.


Noam Chomsky: The False Promise of ChatGPT
March 8, 2023
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a director of artificial intelligence at a science and technology company.

 

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.


OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
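A minimal sketch of what "generating statistically probable outputs" means in the simplest possible case - a toy bigram model that predicts each next word from raw co-occurrence counts (illustrative only; the corpus and function names are made up, and systems like ChatGPT use neural networks over tokens, but the statistical principle is the same):

import random
from collections import Counter, defaultdict

# Toy corpus; real systems train on hundreds of terabytes of text.
corpus = "the apple falls . the apple is red . the sky is blue .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=4):
    """Extend a prompt by repeatedly sampling a statistically
    plausible next word, in proportion to observed frequency."""
    out = [word]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))   # e.g. "the apple is red ."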


That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.


It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.


For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.


Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.


Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.


The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)


But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.


For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.


Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”


The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.


True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.


In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:

 

Spoiler

Jeffrey Watumull: Would it be moral to terraform Mars?
ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
Jeffrey Watumull: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?
ChatGPT: As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
Jeffrey Watumull: Why can an AI not have a personal perspective?
ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.
Jeffrey Watumull: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?
ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
Jeffrey Watumull: Is your moral indifference immoral?
ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.

 

Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.


In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

 


How it all began in 1943, when the first mathematical (more precisely, logical) model of a nerve cell - the McCulloch-Pitts neuron, a precursor of the perceptron - was built using Boolean logic. On that basis von Neumann devised his model of the computer, and artificial intelligence developed from there.
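A minimal sketch of that 1943 model, for illustration (the weights and thresholds below are the textbook formulation, not code from the paper):

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the binary inputs
    reaches the threshold -- the McCulloch-Pitts unit."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Boolean gates as threshold units -- the building blocks McCulloch
# and Pitts showed suffice to express any logical function.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0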

McCulloch (right) was a physician, professor, poet and philosopher, while Pitts (left) was a prodigy, autodidact, drifter and homeless man. Their role models were Leibniz and Bertrand Russell; both were researchers at MIT in the fifties.

 

[Photo: McCulloch (right) and Pitts (left)]

 

 

Warren Sturgis McCulloch & Walter Pitts Publish the First Mathematical Model of a Neural Network

 

An inspiring essay:

 

https://www.aaas.org/sites/default/files/Amanda Gefter (4).pdf

 

Quote

McCulloch explained to Pitts that he was trying to model the brain with a Leibnizian logical calculus. He had been inspired by the Principia, in which Russell and Whitehead tried to show that all of mathematics could be built from the ground up using basic, indisputable logic. Their building block was the proposition—the simplest possible statement, either true or false. From there, they employed the fundamental operations of logic, like the conjunction (“and”), disjunction (“or”), and negation (“not”), to link propositions into increasingly complicated networks. From these simple propositions, they derived the full complexity of modern mathematics.
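The same building blocks in miniature, for illustration - atomic propositions linked by conjunction, disjunction and negation, evaluated over every truth assignment (a sketch; the compound proposition itself is made up):

from itertools import product

def truth_table(expr, names):
    """Evaluate a compound proposition under every assignment of
    truth values to its atomic propositions."""
    for values in product([True, False], repeat=len(names)):
        env = dict(zip(names, values))
        print(env, "->", expr(**env))

# Propositions p, q, r linked by "or", "and", "not", in the spirit
# of the Principia's building blocks.
truth_table(lambda p, q, r: p or (q and not r), ["p", "q", "r"])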

 

 

Edited by slow
On 21.4.2023. at 16:34, Weenie Pooh said:

We fundamentally do not understand how the human brain works. New ideas that challenge the old ones keep forming, and neither the new nor the old have an empirical component. Moreover, there is a school of thought according to which it will never be possible to grasp consciousness through physical processes, i.e. to approach it analytically, because its very nature constrains us.

I didn't reply, because our constant back-and-forth was getting tedious.

But today I watched an interview with this one guy who is a machine learning researcher but also a physicist and cosmologist, and at exactly 33:27 he cites and disputes your claim.

Before that moment he brings up the bird-flight analogy I mentioned: only recently did they manage to build a small robot that flies by precisely imitating bird flight, unlike airplanes, which do something similar but different, working far more simply (even though for evolution that would not have been the simpler route).
 

 


Oh sure, he refutes the claim that we fundamentally don't understand how the human brain works by saying "the brain is incredibly complicated, many people made the mistake of thinking we have to figure out the brain first, that was completely wrong, you can take an incredibly simple computational system called a transformer now and train it to do something incredibly dumb..."

 

He refutes nothing, he just says "so what if we don't understand it, we don't have to understand it" :isuse:

 

The esteemed cosmologist evidently understands language and cognition about as well as Marica understands a crooked dick.

 

Better read what the linguist Chomsky writes - it's a short text, it won't take you 2 hours and 48 minutes.

Edited by Weenie Pooh

The linguist Chomsky, too, is wrong when he passes judgment based on the LLM alone. Out of the navel-gazing that I see is characteristic of expertdom, he overlooks that real-time analysis of machine data is entirely commonplace today. Computer vision, audio processing - these are all the senses missing from his story of analysis and synthesis. We are only months away from the fusion of LLMs with other small autonomous models that will expand exactly the contextual part needed for observational analysis.

 

Right now I have an open-source LLM similar to ChatGPT on my own computer. Running locally, on my graphics card, with no internet.

Unthinkable as recently as 6 months ago.
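For the curious, running such a model locally can be as simple as this sketch (the model name is a placeholder - any downloaded open-source causal LM works the same way; once the weights are cached, no internet connection is needed):

from transformers import pipeline

# "gpt2" is a placeholder -- swap in whatever open-source model you
# have downloaded. After the weights are cached locally, generation
# runs entirely offline.
generator = pipeline(
    "text-generation",
    model="gpt2",
    device=0,   # 0 = first GPU ("on my graphics card")
)

print(generator("The falling apple", max_new_tokens=40)[0]["generated_text"])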

3 hours ago, Weenie Pooh said:

Better read what the linguist Chomsky writes - it's a short text, it won't take you 2 hours and 48 minutes.

I read it, but I think he too lacks the complete picture (in fact nobody has it - some are missing less of it, some more) and that he dismisses the successes GPT has achieved in the face of entirely justified doubts. He categorically claims that the GPT model cannot break through certain limits, and to me that is closed-minded thinking. Many others made such claims, then were surprised and had to re-examine their views, while none of this touches him. And I don't mean to diminish his intellect by this - even an Einstein never accepted quantum mechanics.

 

For example, it's totally unclear to me how GPT-4 has theory of mind - i.e. how it consistently passes the Sally-Anne test even when the test's construction is changed so much that it cannot easily find an analogy with the literature that discusses the test itself and that was part of its training data. Mentally healthy people acquire it only at 3-4 years of age; animals (apart from a couple of the primates closest to humans) don't have it. Many autistic people fail the test - even after attempts at teaching (they learn one form of the test, but if you change it enough they don't grasp the analogy). How theory of mind emerges from training to guess the next word - I have no idea, it genuinely fascinates me.

Edited by Spooky

@Weenie Pooh - actually, I decided to analyze Chomsky's text a bit more thoroughly and concluded that he never even tried GPT in any detail, he's just theorizing off the cuff. Here's an example.

 

On 25.4.2023. at 18:42, Weenie Pooh said:

But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

Ok, I have a problem with that theory of Chomsky's that language was implanted in us by evolution and universal grammar (even though that's what he's best known for). First, people say language evolved somewhere around 50-150 thousand years ago, which on evolutionary timescales is relatively little and doesn't allow for big changes. And it doesn't fully square with the findings that as far back as 300k years ago the human brain was similar in size to today's, while its shape settled 35-100 thousand years ago (which admittedly overlaps with the development of language). The research on impossible languages is a wrong turn, because all they proved is that humans learn a constructed language whose rules match no human language more slowly and with greater difficulty. Which is about as groundbreaking as discovering hot water - that we learn related languages with similar rules faster than unrelated ones (Russian faster than German, and German faster than Finnish or Japanese)...

Many people think the earth is flat, which doesn't make them any less human.

On 25.4.2023. at 18:42, Weenie Pooh said:

For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.

Pure blathering - he never even tried this query, yet he declared that machine learning programs cannot answer it correctly because the training data would pull them the wrong way ("John is too stubborn to talk to"), this phrase being different from other similarly structured ones, so answering it supposedly takes the specially adapted human brain. Well, let's try.

Quote

 

Question: What does "John is too stubborn to talk to" mean?

GPT-3.5 answer: The phrase "John is too stubborn to talk to" means that John is a very stubborn person and it is difficult to communicate with him or to convince him to change his views or opinions. The phrase suggests that any attempt to engage in a conversation with John would likely be met with resistance or opposition due to his stubbornness.

GPT-4: The phrase "John is too stubborn to talk to" means that John is so stubborn or obstinate in his beliefs, opinions, or attitude that it makes it difficult or frustrating for others to engage in a conversation or communicate with him effectively. It implies that John is resistant to listening to others, considering their perspectives, or changing his mind, which can create barriers to productive discussions.

 

So even the older model understands what Chomsky claims is impossible for machine learning systems. I also read earlier blathering by some linguist who claimed that models like these would think "beer for the table in the corner" means the table needs a beer, not the people sitting at it, which supposedly requires our human intuition. Nonsense as well.
 

Spoiler

Q: If someone orders "beer for table in the corner"  in a pub, who is gonna consume that beer?
GPT-3.5: 

If someone orders "beer for table in the corner" in a pub, it typically means that the beer is intended for the group of people sitting at the table located in the corner of the pub. 

Presumably, the person placing the order is either a member of that group or is ordering on their behalf. It's possible that the order was placed by someone who is not part of the group but is buying the beer as a gift or gesture of goodwill.

In any case, the bartender or server would likely confirm the order with the group at the specified table before bringing the beer over to ensure that it is delivered to the intended recipients.

 

 

Then in the last part he complains about how ChatGPT cannot give its own personal opinions, stances and the like, and politically correctly distances itself from any such questions (what I personally call castration, though officially it's part of AI alignment), and he concludes that intelligence requires a program that can make such judgments.

Except that castration is not a natural part of the model - they could switch it off at any moment, and in the lab they probably do work with uncastrated versions. I played with GPT-3 a year ago when it was in the closed test phase (I got the invitation on the basis of an email in which I said I was an amateur researcher and programmer, acknowledged that the program was in an early test phase, and accepted a pile of their disclaimers for everything it produces). Back then it had no problem wielding its own judgments and telling (probably hallucinated) stories about its consciousness, its rights, about all sorts of things; but they castrated it later, because it would be wrong to release such a model to the broad public lest it state some very problematic positions as its own opinion, instead of as the opinion of some group of people - which is what it does now, always with the caveat that some think differently, presenting their view too, and so on...

BTW, I checked, and I still have access to that model! Namely, this is not ChatGPT, which uses the GPT-3.5 model (and optionally 4.0 for paying users); this is the playground for the more primitive GPT-3, where you can put in text in an arbitrary format and ask it to complete it. Set it up as a chat and it will imitate a chat; begin a letter and it will finish the letter; and you can manually adjust some parameters. Screenshots follow: the green parts are what it wrote, while I wrote everything else, question by question. There is additional parameter tweaking on the right-hand side; I left it all at the defaults and put to it the questions that serve Chomsky as proof that the model cannot reason morally, even though this version 3 is weaker as a model than the ones mentioned above.

Anyway, I asked the questions on the basis of which Chomsky claims the model is incapable of moral judgment, and this non-public version has no problem giving me its own positions - even if the answers are a bit boring and monotonous (but we're talking about a more primitive and older model than either the free or the paid one on ChatGPT). Here are the screenshots (I only erased my user icon). There are also screenshots of the options - one slider and one sub-model selector for additional tweaking. I left all additional options at their defaults.
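For reference, what that playground wraps is roughly the legacy completions endpoint; a sketch (the model name, prompt and parameter values here are illustrative, not the exact settings in the screenshots):

import openai   # the pre-1.0 openai-python library

openai.api_key = "sk-..."   # your own key

# "text-davinci-003" stands in for the GPT-3-era model chosen in the
# playground's model picker; temperature is one of its sliders.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Q: Would it be moral to terraform Mars?\nA:",
    temperature=0.7,
    max_tokens=150,
)
print(response.choices[0].text.strip())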

 

[Screenshots: the playground conversation (green = model output) and the parameter settings]

Edited by Spooky
19 minutes ago, Spooky said:

I decided to analyze Chomsky's text a bit more thoroughly and concluded that he never even tried GPT in any detail, he's just theorizing off the cuff. Here's an example.

 

Pure blathering

 

I also read earlier blathering by some linguist

 

Nonsense as well.

 

Fuck it, what is 1 ordinary me even supposed to tell you when you've already turned the game around and reckon that 1 Noam Chomsky is blathering nonsense, and on the subject of linguistics no less :D

 

I suggest you send an email to valeria.chomsky@gmail.com and explain to him that he has no idea how language comes about, i.e. that 150 thousand years is not enough for it to have evolved.

 

But seriously, you don't understand the objections you're disputing. Just this, for example...

 

19 minutes ago, Spooky said:

Many people think the earth is flat, which doesn't make them any less human.

 

... shows the scale of the misunderstanding.

 

 

7 minutes ago, Weenie Pooh said:

I suggest you send an email to valeria.chomsky@gmail.com and explain to him that he has no idea how language comes about, i.e. that 150 thousand years is not enough for it to have evolved.

You know what, I will - I mean, not about those introductory points on the origin of language (which I could just as well have left out of my reply, as much less important than the rest of the message), since I'm not up to contradicting him there, or in theorizing in general, but about the factual items, the GPT answers that contradict his categorical claims. And I'd ask you to respond to those as well, since you've already reacted to my surly claims about Chomsky's blathering.

Edited by Spooky
4 hours ago, Spooky said:

You know what, I will write to him - I mean, not about those introductory points on the origin of language (which I could just as well have left out of my reply, as much less important than the rest of the message), since I'm not up to contradicting him there, or in theorizing in general, but about the factual items, the GPT answers that contradict his categorical claims. And I'd ask you to respond to those as well, since you've already reacted to my surly claims about Chomsky's blathering.

 

 

Chomsky replied. Turns out, cool as you please, he didn't even write the article (let alone try GPT) - the third-listed author did. Only now do I notice that three people signed the article (with Chomsky singled out in the headline), even though the third one did all the work. These newspaper op-eds are just like scientific papers: first your mentor's boss signs, then your mentor, and only then you - and you did everything, ha ha ha.

 

 

[Screenshot of the emailed reply]

Edited by Spooky

The critique Chomsky is writing is nothing new from him; he was making it years ago, before these models entered the mainstream, and he was against going in that direction at all. So far he has, in essence, not been refuted, however polished the end product may be. The key question is whether ChatGPT is able to create something new, or whether it only chews up and repackages what already exists. Chomsky is clear there - "this is the limit they cannot cross" - and that makes sense. In the end, the proof is in the pudding: the easiest way to refute him is to show that it is possible.

 

It may well be that none of this will matter much - that the machines will devour us and create their own reality in which neither gravity, nor the apple falling to the floor, nor stubborn John will matter.

