
Twitter v2.0 - powered by Elon


mustang


Just now, dùda said:

Hm, it's not the little American's dream that's gone; what's gone is the dream of everyone screwed over in the meantime, all so the little American could swagger a bit

Me, I have no problem with them, let them be: the problem is their "that's just how it should be" attitude... :D


Yes, here in America™ we swagger around and commute an hour to work because we live in the almighty suburbs, instead of embracing the famous Serbian urban model: just plant your butt in Belgrade and let the chips fall where they may


Back to Musk.

 

A phenomenal article from NYT Magazine, which, among other things, includes conversations with two car owners who knowingly accept being beta testers. As it happens, both are IT professionals.

 

https://www.nytimes.com/2023/01/17/magazine/tesla-autopilot-self-driving-elon-musk.html?searchResultPosition=1

 

Here's the part with one of them, plus a look at the legal battles that could destroy the company:

 

Spoiler

The future of Tesla may rest on whether drivers knew that they were engaged in this data-gathering experiment, and if so, whether their appetite for risk matched Musk’s. I wanted to hear from the victims of some of the more minor accidents, but they tended to fall into two categories, neither of which predisposed them to talk: They either loved Tesla and Musk and didn’t want to say anything negative to the press, or they were suing the company and remaining silent on the advice of counsel. (Umair Ali, whose Tesla steered into a highway barrier in 2017, had a different excuse: “Put me down as declined interview because I don’t want to piss off the richest man in the world.”)

Then I found Dave Key. On May 29, 2018, Key’s 2015 Tesla Model S was driving him home from the dentist in Autopilot mode. It was a route that Key had followed countless times before: a two-lane highway leading up into the hills above Laguna Beach, Calif. But on this trip, while Key was distracted, the car drifted out of its lane and slammed into the back of a parked police S.U.V., spinning the car around and pushing the S.U.V. up onto the sidewalk. No one was hurt.

 

Key, a 69-year-old former software entrepreneur, took a dispassionate, engineer’s-eye view of his own accident. “The problem with stationary objects — I’m sorry, this sounds stupid — is that they don’t move,” he said. For years, Tesla’s artificial intelligence had trouble separating immobile objects from the background. Rather than feeling frustrated that the computer hadn’t figured out such a seemingly elementary problem, Key took comfort in learning that there was a reason behind the crash: a known software limitation, rather than some kind of black-swan event.

Last fall, I asked Key to visit the scene of the accident with me. He said he would do me one better; he would take me there using Tesla’s new Full Self-Driving mode, which was still in beta. I told Key that I was surprised he was still driving a Tesla, much less paying extra — F.S.D. now costs $15,000 — for new autonomous features. If my car had tried to kill me, I would have switched brands. But in the months and years after his Model S was totaled, he bought three more.


We met for breakfast at a cafe in Laguna Beach, about three miles from the crash site. Key was wearing a black V-neck T-shirt, khaki shorts and sandals: Southern California semiretirement chic. As we walked to our table, he locked the doors of his red 2022 Model S, and the side mirrors folded up like a dog’s ears when it’s being petted.

Key had brought along a four-page memo he drafted for our interview, listing facts about the accident, organized under subheadings like “Tesla Full Self-Driving Technology (Discussion).” He’s the sort of man who walks around with a battery of fully formed opinions on life’s most important subjects — computers, software, exercise, money — and a willingness to share them. He was particularly concerned that I understand that Autopilot and F.S.D. were saving lives: “The data shows that their accident rate while on Beta is far less than other cars,” one bullet point read, in 11-point Calibri. “Slowing down the F.S.D. Beta will result in more accidents and loss of life based on hard statistical data.”

 

Accidents like his — and even the deadly ones — are unfortunate, he argued, but they couldn’t distract society from the larger goal of widespread adoption of autonomous vehicles. Key drew an analogy to the coronavirus vaccines, which prevented hundreds of thousands of deaths but also caused rare deaths and injuries from adverse reactions. “As a society,” he concluded, “we choose the path to save the most lives.”

 

Elon Musk has summed up his attitude toward risk as "most good for most number of people." Credit: Jae C. Hong/Associated Press

We finished breakfast and walked to the car. Key had hoped to show off the newest version of F.S.D., but his system hadn’t updated yet. “Elon said it would be released at the end of the week,” he said. “Well, it’s Sunday.” Musk had been hinting for weeks that the update would be a drastic improvement over F.S.D. 10.13, which had been released over the summer. Because Musk liked to make little jokes out of the names and numbers in his life, the version number would jump to 10.69 with this release. (The four available Tesla models are S, 3, X and Y, presumably because that spells the word “sexy.”)

Key didn’t want to talk about Musk, but the executive’s reputational collapse had become impossible to ignore. He was in the middle of his bizarre, on-again-off-again campaign to take over Twitter, to the dismay of Tesla loyalists. And though he hadn’t yet attacked Anthony Fauci or spread conspiracy theories about Nancy Pelosi’s husband or gone on a journalist-banning spree on the platform, the question was already suggesting itself: How do you explain Elon Musk?

“People are flawed,” Key said cautiously, before repeating a sentiment that Musk often said about himself: If partisans on both sides hated him, he must be doing something right. No matter what trouble Musk got himself into, Key said, he was honest — “truthful to his detriment.”

As we drove, Key compared F.S.D. and the version of Autopilot on his 2015 Tesla. Autopilot, he said, was like fancy cruise control: speed, steering, crash avoidance. Though in his case, he said, “I guess it didn’t do crash avoidance.” He had been far more impressed by F.S.D. It was able to handle just about any situation he threw at it. “My only real complaint is it doesn’t always select the lane that I would.”

After a minute, the car warned Key to keep his hands on the wheel and eyes on the road. “Tesla now is kind of a nanny about that,” he complained. If Autopilot was once dangerously permissive of inattentive drivers — allowing them to nod off behind the wheel, even — that flaw, like the stationary-object bug, had been fixed. “Between the steering wheel and the eye tracking, that’s just a solved problem,” Key said.

Soon we were close to the scene of the crash. Scrub-covered hills with mountain-biking trails lacing through them rose on either side of us. That was what got Key into trouble on the day of the accident. He was looking at a favorite trail and ignoring the road. “I looked up to the left, and the car went off to the right,” he said. “I was in this false sense of security.”

 

We parked at the spot where he hit the police S.U.V. four years earlier. There was nothing special about the road here: no strange lines, no confusing lane shift, no merge. Just a single lane of traffic running along a row of parked cars. Why the Tesla failed at that moment was a mystery.

Eventually, Key told F.S.D. to take us back to the cafe. As we started our left turn, though, the steering wheel spasmed and the brake pedal juddered. Key muttered a nervous, “OK. … ”

After another moment, the car pulled halfway across the road and stopped. A line of cars was bearing down on our broadside. Key hesitated a second but then quickly took over and completed the turn. “It probably could have then accelerated, but I wasn’t willing to cut it that close,” he said. If he was wrong, of course, there was a good chance that he would have had his second A.I.-caused accident on the same one-mile stretch of road.

Three weeks before Key hit the police S.U.V., Musk wrote an email to Jim Riley, whose son Barrett died after his Tesla crashed while speeding. Musk sent Riley his condolences, and the grieving father wrote back to ask whether Tesla’s software could be updated to allow an owner to set a maximum speed for the car, along with other restrictions on acceleration, access to the radio and the trunk and distance the car could drive from home. Musk, while sympathetic, replied: “If there are a large number of settings, it will be too complex for most people to use. I want to make sure that we get this right. Most good for most number of people.”

It was a stark demonstration of what makes Musk so unusual as a chief executive. First, he reached out directly to someone who was harmed by one of his products — something it's hard to imagine the head of G.M. or Ford contemplating, if only for legal reasons. (Indeed, this email was entered into evidence after Riley sued Tesla.) And then Musk rebuffed Riley. No vague "I'll look into it" or "We'll see what we can do." Riley received a hard no.

Like Key, I want to resist Musk’s tendency to make every story about him. Tesla is a big car company with thousands of employees. It existed before Elon Musk. It might exist after Elon Musk. But if you want a parsimonious explanation for the challenges the company faces — in the form of the lawsuits, a crashing stock price and an A.I. that still seems all too capable of catastrophic failure — you should look to its mercurial, brilliant, sophomoric chief executive.

Perhaps there’s no mystery here: Musk is simply a narcissist, and every reckless swerve he makes is meant solely to draw the world’s attention. He seemed to endorse this theory in a tongue-in-cheek way during a recent deposition, when a lawyer asked him, “Do you have some kind of unique ability to identify narcissistic sociopaths?” and he replied, “You mean by looking in the mirror?”

 

 

 

But what looks like self-obsession and poor impulse control might instead be the fruits of a coherent philosophy, one that Musk has detailed on many occasions. It’s there in the email to Riley: the greatest good for the greatest number of people. That dictum, as part of an ad hoc system of utilitarian ethics, can explain all sorts of mystifying decisions that Musk has made, not least his breakneck pursuit of A.I., which in the long term, he believes, will save countless lives.

Unfortunately for Musk, the short term comes first, and his company faces a rough few months. In February, the first lawsuit against Tesla for a crash involving Autopilot will go to trial. Four more will follow in quick succession. Donald Slavik, who will represent plaintiffs in as many as three of those cases, says that a normal car company would have settled by now: “They look at it as a cost of doing business.” Musk has vowed to fight it out in court, no matter the dangers this might present for Tesla. “The dollars can add up,” Slavik said, “especially if there’s any finding of punitive damages.”


Slavik sent me one of the complaints he filed against Tesla, which lists prominent Autopilot crashes from A to Z — in fact, from A to WW. In China, a Tesla slammed into the back of a street sweeper. In Florida, a Tesla hit a tractor-trailer that was stretched across two lanes of a highway. During a downpour in Indiana, a Tesla Model 3 hydroplaned off the road and burst into flames. In the Florida Keys, a Model S drove through an intersection and killed a pedestrian. In New York, a Model Y struck a man who was changing his tire on the shoulder of the Long Island Expressway. In Montana, a Tesla steered unexpectedly into a highway barrier. Then the same thing happened in Dallas and in Mountain View and in San Jose.

The arrival of self-driving vehicles wasn’t meant to be like this. Day in, day out, we scare and maim and kill ourselves in cars. In the United States last year, there were around 11 million road accidents, nearly five million injuries and more than 40,000 deaths. Tesla’s A.I. was meant to put an end to this blood bath. Instead, on average, there is at least one Autopilot-related crash in the United States every day, and Tesla is under investigation by the National Highway Traffic Safety Administration.

Ever since Autopilot was released in October 2015, Musk has encouraged drivers to think of it as more advanced than it was, stating in January 2016 that it was "probably better" than a human driver. That November, the company released a video of a Tesla navigating the roads of the Bay Area with the disclaimer: "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself." Musk also rejected the name "Copilot" in favor of "Autopilot."

The fine print made clear that the technology was for driver assistance only, but that message received a fraction of the attention of Musk’s announcements. A large number of drivers seemed genuinely confused about Autopilot’s capabilities. (Tesla also declined to disclose that the car in the 2016 video crashed in the company’s parking lot.) Slavik’s legal complaint doesn’t hold back: “Tesla’s conduct was despicable, and so contemptible that it would be looked down upon and despised by ordinary decent people.”

 

 

 

The many claims of the pending lawsuits come back to a single theme: Tesla consistently inflated consumer expectations and played down the dangers involved. The cars didn’t have sufficient driver monitoring because Musk didn’t want drivers to think that the car needed human supervision. (Musk in April 2019: “If you have a system that’s at or below human-level reliability, then driver monitoring makes sense. But if your system is dramatically better, more reliable than a human, then monitoring does not help much.”) Drivers weren’t warned about problems with automatic braking or “uncommanded lane changes.” The company would admit to the technology’s limitations in the user manual but publish viral videos of a Tesla driving a complicated route with no human intervention.

Musk’s ideal customer was someone like Key — willing to accept the blame when something went wrong but possessing almost limitless faith in the next update. In a deposition, an engineer at Tesla made this all but explicit: “We want to let the customer know that, No. 1, you should have confidence in your vehicle: Everything is working just as it should. And, secondly, the reason for your accident or reason for your incident always falls back on you.”

After our failed left turn in Laguna Beach, Key quickly diagnosed the problem. If only the system had upgraded to F.S.D. 10.69, he argued, the car surely would have managed the turn safely. Unfortunately for Musk, not every Tesla owner is like Dave Key. The plaintiffs in the Autopilot lawsuits might agree that the A.I. is improving, but only on the backs of the early adopters and bystanders who might be killed along the way.

 

 

13 hours ago, dùda said:

America, by the way, is screwing up this planet more than the whole rest of the world combined

They have a whole slew of those cities that are totally irrational; there's nothing they don't have. All idyllic: villa, pool, your own garden, pure bliss. But on top of that comes an absurd length of roads and utility lines; in general, the amount of infrastructure per capita is staggering. Everything is irrational. Both public and private transport — a scandalous waste of resources, frankly.

here it is, Atlanta, a whole 3 inhabitants per hectare :dry:

[Image: aerial view of Atlanta]

and I also found this

''Sustainable urbanism means planning urban areas as walkable, transit-oriented neighborhoods, with high-performance buildings, high-performance infrastructure, compactness, and biophilia. Instead of the usual view that big cities pollute the most (measured per unit of area), an analysis of per-capita carbon dioxide emissions is presented, showing that big sprawling suburban houses bear the most responsibility for "cooking the planet"

When demonstrating the good choices of sustainable urbanism, examples of conventional and sustainable urbanism should always be shown side by side: the cost of 2 highway interchanges equals the cost of 4 miles of tram line; at a "sprawl" density of 3 dwellings per acre (0.40 ha), a family drives on average 24,000 miles per year by car, while at 16 dwellings per acre under sustainable urbanism a family drives 9,000 miles per year; "sprawl" streets are subordinated to cars (driven at 50 mph), while sustainable-urbanism streets are designed for trams, mixed-use buildings, and shops (driven at 30 mph).

Transit viability: settlements need sufficient density to support public transit by bus, trolleybus, light rail, or metro. It has been established that a density of 7 dwellings per acre (17.5 persons/acre, i.e. 44 persons/ha, assuming an average household of 2.5 persons) is needed to sustain bus service, or 15 to 20 dwellings per acre (37.5-50 persons/acre, i.e. 94-125 persons/ha) to sustain tram or trolleybus service. Urban planning must provide for public transit and set residential-density requirements, public transit should provide good connections to centers, and infrastructure corridors should follow public-transit corridors.''
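The density thresholds quoted above can be sanity-checked with a quick unit conversion (1 hectare ≈ 2.471 acres, and the quote's stated 2.5 persons per household). A minimal sketch, not from the original text — the small mismatch with the quoted per-hectare figures comes down to rounding:

```python
# Check the transit-viability density figures quoted above.
# Assumptions: 1 hectare = 2.47105 acres; 2.5 persons per household (as stated).
ACRES_PER_HECTARE = 2.47105
PERSONS_PER_HOUSEHOLD = 2.5

def persons_per_hectare(dwellings_per_acre):
    """Convert a dwellings-per-acre density into persons per hectare."""
    return dwellings_per_acre * PERSONS_PER_HOUSEHOLD * ACRES_PER_HECTARE

# Bus-service threshold: 7 dwellings/acre -> ~43 persons/ha (quoted as 44)
print(round(persons_per_hectare(7)))
# Tram/trolleybus threshold: 15-20 dwellings/acre -> ~93-124 persons/ha
# (quoted as 94-125)
print(round(persons_per_hectare(15)))
print(round(persons_per_hectare(20)))
```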

 

On 18.1.2023. at 22:09, mraki said:

Self-driving vehicles could lead to much better optimized and personalized public transport that actually gets the job done. They could also save the time people spend driving. I know plenty of people who spend a couple of hours a day behind the wheel just to get from point A to point B, and nobody enjoys it. The alternative is poor or nonexistent.

If someone can sleep, or do something useful or entertaining while the car drives itself, I don't see any problem.

I mean, I don't see a problem with the idea itself; self-driving cars could certainly be easier to share and better utilized than what we have today, but how all of that will be regulated is another story.

 

There are numerous problems with the very idea of a self-driving car; some people who worked on the concept and then gave up now reckon that FSD is not even theoretically feasible. Roughly: the environment is too variable, and the algorithm would have to be so complex that it would constrain itself in unpredictable ways...

 

For me the problem is more fundamental: why graft very sophisticated software solutions onto an ancient, chaotic system like driving cars through city streets in the first place?

 

There are proven alternatives in the form of public transit, preferably rail, running on fixed, clearly defined corridors. And if artificial intelligence really must drive us, better that it operate a train or a tram than a car.


what's needed is a solution that doesn't require opening hundreds of new mines for lithium, copper, cobalt, and similar raw materials, and such solutions are not the ones the auto industry offers

