
We need to tell people ChatGPT will lie to them, not debate linguistics


ChatGPT lies to people. This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it.

We accidentally invented computers that can lie to us

I tweeted (and tooted) this:

"We accidentally invented computers that can lie to us and we can't figure out how to make them stop."

Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman's excellent review of the field, Eight Things to Know about Large Language Models. In particular this:

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.

Sycophancy and sandbagging are my two favourite new pieces of AI terminology!

What I find fascinating about this is that these extremely problematic behaviours are not the system working as intended: they are bugs! And we haven't yet found a reliable way to fix them.

(Here's the paper that snippet references: Discovering Language Model Behaviors with Model-Written Evaluations from December 2022.)

"But a machine can't deliberately tell a lie"

I got quite a few replies complaining that it's inappropriate to refer to LLMs as "lying", because to do so anthropomorphizes them and implies a level of intent which isn't possible.

I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic, not entities with intent and opinions.

But in this case, I think the visceral clarity of being able to say "ChatGPT will lie to you" is a worthwhile trade.

Science fiction has been presenting us with a model of "artificial intelligence" for decades. It's firmly baked into our culture that an "AI" is an all-knowing computer, incapable of lying and able to answer any question with pin-point accuracy.

Large language models like ChatGPT, on first encounter, seem to fit that bill. They appear astonishingly capable, and their command of human language can make them seem like a genuine intelligence, at least at first glance.

But the more time you spend with them, the more that illusion starts to fall apart.

They fail spectacularly when prompted with logic puzzles, or basic arithmetic, or when asked to produce citations or link to sources for the information they present.

Most concerningly, they hallucinate or confabulate: they make things up! My favourite example of this remains their ability to entirely imagine the content of a URL. I still see this catching people out every day. It's remarkably convincing.

Why ChatGPT and Bing Chat are so good at making things up is an excellent in-depth exploration of this issue from Benj Edwards at Ars Technica.

We need to explain this in straightforward terms

We're trying to solve two problems here:

  1. ChatGPT cannot be trusted to provide factual information. It has a very real risk of making things up, and if people don't understand that, they are guaranteed to be misled.
  2. Systems like ChatGPT are not sentient, or even intelligent. They do not have opinions, or feelings, or a sense of self. We must resist the temptation to anthropomorphize them.

I believe that the most direct form of harm caused by LLMs today is the way they mislead their users. The first problem needs to take precedence.

It is vitally important that new users understand that these tools cannot be trusted to provide factual answers. We need to help people get there as quickly as possible.

Which of these two messages do you think is more effective?

ChatGPT will lie to you

Or

ChatGPT doesn't lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That's a term used in psychiatry to describe when someone fills a gap in their memory with a falsification that they believe to be true - though of course these things don't have human minds so even confabulation is unnecessarily anthropomorphic. I hope you've enjoyed this linguistic detour!

Let's go with the first one. We should be shouting this message from the rooftops: ChatGPT will lie to you.

That doesn't mean it's not useful - it can be astonishingly useful, for all kinds of purposes... but seeking truthful, factual answers is very much not one of them. And everyone needs to understand that.

Convincing people that these tools aren't sentient AIs out of a science fiction story can come later. Once people understand their flaws, this should be an easier argument to make!

Should we warn people off or help them on?

This situation raises an ethical conundrum: if these tools can't be trusted, and people are demonstrably falling for their traps, should we encourage people not to use them at all, or even campaign to have them banned?

Every day I personally find new problems that I can solve more effectively with the help of large language models. Some recent examples from just the last few weeks:

Each of these represents a problem I could have solved without ChatGPT... but at a time cost that would have been prohibitively expensive, to the point that I wouldn't have bothered.

I wrote more about this in AI-enhanced development makes me more ambitious with my projects.

Honestly, at this point using ChatGPT in the way that I do feels like a massively unfair competitive advantage. I'm not worried about AI taking people's jobs: I'm worried about the impact of AI-enhanced developers like myself.

It genuinely feels unethical for me not to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them, as safely and responsibly as possible.

I think the message we should be emphasizing is this:

These are incredibly powerful tools. They are far harder to use effectively than they first appear. Invest the effort, but approach with caution: we accidentally invented computers that can lie to us and we can't figure out how to make them stop.

There's a time for linguistics, and there's a time for grabbing the general public by the shoulders and shouting "It lies! The computer lies to you! Don't trust anything it says!"

kleer001
237 days ago
They don't lie any more than a pen or a paintbrush, a typewriter or a stone. It's a goddamn tool. Don't give a tool agency by expecting truth from it. This pearl clutching is nauseating.
mareino
235 days ago
I get that "AI lies" is a concise message. Let me offer a better message: "AI bullshits." Why? Because liars know what the truth is. Bullshitters don't. A liar has an ulterior motive to deceive. A bullshitter really hopes they got the right answer. It also points to a solution: AI telling the user how confident it is about any given sentence. If AI could own up to its bullshit, well, maybe we would create something better than mankind.
Washington, District of Columbia

Musk admits NPR isn’t state-affiliated after asking questions he could have Googled

[Image: NPR's Twitter profile as of April 9, 2023, with the newly applied label.]

When Elon Musk slapped NPR's Twitter account with a "US state-affiliated media" label last week, it quickly became clear he didn't know much about how NPR operates or how it's funded. After admitting the state-affiliated label was wrong, Musk changed NPR's tag yesterday to "Government Funded Media"—even though NPR gets less than 1 percent of its annual funding directly from the US government.

The state-affiliated tag took NPR and many others by surprise, in part because it contradicted Twitter's own policy that cited NPR and the BBC as examples of state-financed media organizations that retain editorial independence. Twitter has historically applied its state-affiliated tag to state-controlled news organizations like Russia's RT and China's Xinhua.

Twitter changed its policy to remove the reference to editorial independence at NPR and the BBC, but didn't scrub the old language from another Twitter help page that still describes both NPR and the BBC as editorially independent. The BBC's main Twitter account is also newly labeled as "Government Funded Media" after previously having no label.

In emails with NPR reporter Bobby Allyn, Musk asked basic questions that he could have found answers to with a quick Internet search. "He didn't seem to understand the difference between public media and state-controlled media," Allyn said Friday in an interview with Mary Louise Kelly on the show All Things Considered.

Allyn continued:

He asked me at one point, quote, "what's the breakdown of NPR's annual funding?" And he asked, "who appoints leadership at NPR?" These are questions you can get by Googling, but for some reason he wanted to ask me. And also, let's take a moment and pause on these questions, Mary Louise, because he made a major policy decision, right? And after doing so, he is just now asking for the basic facts. This is not exactly how most CEOs in America operate. Anyway, I answered his questions. About 1 percent of NPR's budget is from federal grants, and an independent board appoints NPR's CEO, who picks leadership.

Musk: Label “might not be accurate”

Musk could have gotten the NPR funding information from this NPR page, which says, "On average, less than 1 percent of NPR's annual operating budget comes in the form of grants from CPB [Corporation for Public Broadcasting] and federal agencies and departments."

Corporate sponsorships are the top contributor to NPR funding, accounting for 39 percent of average annual revenue between 2018 and 2022. NPR gets another 31 percent of its funding in programming fees from member organizations. Federal funding indirectly contributes to the latter category because the publicly funded CPB provides annual grants to public radio stations that pay NPR for programming.

Musk's emails were further detailed in an article by Allyn. After Allyn told Musk that NPR gets only 1 percent of its money from the government, Musk replied, "Well, then we should fix it."

"The operating principle at new Twitter is simply fair and equal treatment, so if we label non-US accounts as govt, then we should do the same for US, but it sounds like that might not be accurate here," Musk wrote in another email to Allyn.

NPR's current government-funded label links to Twitter's policy, which includes Twitter's definition of state-affiliated media accounts but doesn't provide a definition of government-funded.

Ex-Twitter exec explains pre-Musk labeling

Allyn's article quoted a former Twitter executive who helped develop the state-affiliation labels. The executive "said that editorial independence had long been the deciding factor in whether to issue the designation." The article continued:

The People's Daily in China, and Sputnik and RT in Russia, for instance, received the labels, but outlets with editorial autonomy that received some government funding did not.

"In the end, [we] felt that the most fair and balanced way to implement labels was to call out state connections that had a demonstrated track record of influencing content of news reporting," the former Twitter executive said.

That meant that NPR, the government-funded outlet Voice of America, "and even Al Jazeera didn't qualify under our designation," the former employee said. The point of the labels, the former executive said, was to help users understand what they're seeing on the platform.

Al Jazeera's Twitter accounts are not labeled as either state-affiliated or government-funded. Twitter added a government-funded label to the US-owned Voice of America's Twitter account sometime this weekend.

We contacted NPR about the new "Government Funded Media" label and will update this article if we get a response. NPR has stopped posting tweets since getting the state-affiliated tag, and updated its bio to read, "NPR is an independent news organization committed to informing the public about the world around us. You can find us every other place you read the news."

Twitter no longer a “credible platform”

KCRW, an NPR member station in Santa Monica, California, emailed listeners to tell them that KCRW will no longer post on Twitter from its main accounts. KCRW President Jennifer Ferro noted that the state-affiliated media tag is "a term the platform applies to propaganda outlets in countries without a free press, a guaranteed right in the United States."

"There is a chance that Twitter will remove the label from NPR. Even so, we no longer have confidence that Twitter is a credible platform," the email said.

PEN America, a 100-year-old nonprofit that advocates for free expression through literature, criticized Twitter for labeling NPR as state-affiliated media.

"Twitter has inexplicably added a warning to NPR's Twitter account, labeling the venerated news outlet as state sponsored media, on par with Russia Today and other mouthpieces for authoritarian regimes," the group said. PEN America pointed to Twitter's definition of state-affiliated media as "outlets where the state exercises control over editorial content through financial resources, direct or indirect political pressures, and/or control over production and distribution."

"That is unquestionably not NPR, which assiduously maintains editorial independence... the US government exercises no editorial control over NPR whatsoever," the group said.

kleer001
237 days ago
Good, burn twitter to the ground. Mass media should not be free anonymously.

ChatGPT is making up fake Guardian articles. Here’s how we’re responding | Chris Moran


The risks inherent in the technology, plus the speed of its take-up, demonstrate why it’s so vital that we keep track of it

  • Chris Moran is the Guardian’s head of editorial innovation

Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem we’d identified? Or had we been forced to take it down by the subject of the piece through legal means?

The reporter couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written. It was a subject they were identified with and had a record of covering. Worried that there may have been some mistake at our end, they asked colleagues to go back through our systems to track it down. Despite the detailed records we keep of all our content, and especially around deletions or legal issues, they could find no trace of its existence.

Continue reading...
kleer001
238 days ago
Why are people accusing ChatGPT of saying true things? That's not how it works.

'Slavery was wrong' among things teachers can't say anymore - The Washington Post

kleer001
265 days ago
Wouldn't it be slavery "is" wrong, seeing that there's still more than a bit going on?

Autoregressive long-context music generation with Perceiver AR


We present our work on music generation with Perceiver AR, an autoregressive architecture that is able to generate high-quality samples as long as 65k tokens—the equivalent of minutes of music, or entire pieces!

🎵 Music Samples · 📝 ICML Paper · GitHub Code · DeepMind Blog

The playlist above contains samples generated by a Perceiver AR model trained on 10,000 hours of symbolic piano music (and synthesized with Fluidsynth).

Introduction

Transformer-based architectures have been recently used to generate outputs from various modalities—text, images, music—in an autoregressive fashion. However, their compute requirements scale poorly with the input size, which makes modeling very long sequences computationally infeasible. This severely limits models’ abilities in settings where long-range context is useful for capturing domain-specific properties. Music domains offer a perfect testbed, since they often exhibit long-term dependencies, repeating sequences and overall coherence over entire minutes—all necessary ingredients for producing realistic samples that are pleasing to the human ear!
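To make "scale poorly" concrete: standard attention compares every token with every other token, so compute and memory grow quadratically with sequence length. A quick back-of-the-envelope illustration in Python (the sequence lengths here are illustrative, not taken from the paper):

# Self-attention builds an n-by-n score matrix, so cost grows quadratically.
for n in (1_024, 65_536):
    print(f"n = {n:>6}: {n * n:.2e} attention scores per layer")
# n =   1024: 1.05e+06
# n =  65536: 4.29e+09  (a 64x longer input costs 4096x more compute)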

Transformer vs. Perceiver AR

To ameliorate these issues, we propose Perceiver AR, an autoregressive version of the original Perceiver architecture. A Perceiver model maps the input to a fixed-size latent space, where all further processing takes place. This enables scaling up to inputs of over 100k tokens! Perceiver AR builds on the initial Perceiver architecture by adding causal masking. This allows us to autoregressively generate music samples of high quality and end-to-end consistency, additionally achieving state-of-the-art performance on the MAESTRO dataset.
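To illustrate the idea, here is a minimal numpy sketch of the cross-attention bottleneck, using toy shapes rather than the real model's sizes; this is a simplified illustration of the mechanism, not the actual implementation:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(latents, inputs):
    # M latents attend over N input tokens: O(M*N) work instead of the
    # O(N*N) of full self-attention over the raw input.
    scores = latents @ inputs.T / np.sqrt(latents.shape[-1])  # (M, N)
    return softmax(scores) @ inputs                           # (M, d)

N, M, d = 8_192, 256, 64           # toy sizes (the model scales to 65k inputs, 1024 latents)
inputs  = np.random.randn(N, d)    # embedded input tokens
latents = np.random.randn(M, d)    # fixed-size latent array (learned in the real model)
z = cross_attend(latents, inputs)  # all further self-attention operates on (M, d)
print(z.shape)                     # (256, 64)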

Setup

[Figure: Perceiver AR model architecture]

Perceiver AR first maps the inputs (in the diagram, [P,e,r,c,e,i,v,e,r,A,R]) to a fixed-size latent array, via a single cross-attention operation. These latents (3 illustrated above) then interact in a deep stack of self-attention layers to produce estimates for each target. The most recent inputs ([r,A,R]) correspond to queries, and each latent corresponds to a different target position ({1: A, 2: R, 3: <EOS>}).

Causal masking is used in both kinds of attention operations, to maintain end-to-end autoregressive ordering. Each latent can therefore only attend to (a) itself and (b) latents corresponding to ‘earlier’ information (either input tokens or target positions). This respects the standard autoregressive formulation, where the probability distribution for the t-th output is only conditioned on what was generated at previous timesteps 1, ..., t-1.
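As a sketch of how that constraint is typically enforced (standard masked attention, shown here for the latent self-attention; this is not the paper's actual code): disallowed score entries are set to negative infinity so they receive zero weight after the softmax.

import numpy as np

def causal_self_attend(latents):
    # Latent t may attend only to latents 1..t, never to 'later' positions.
    M, d = latents.shape
    scores = latents @ latents.T / np.sqrt(d)       # (M, M) attention scores
    allowed = np.tril(np.ones((M, M), dtype=bool))  # lower triangle: past + self
    scores = np.where(allowed, scores, -np.inf)     # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ latents

print(causal_self_attend(np.random.randn(3, 8)).shape)  # (3, 8)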

In the music domain, we use up to 65k-token inputs, which corresponds to several minutes in the symbolic domain and one minute in the raw audio domain.

Symbolic music

The playlist at the top showcases 8 unconditional samples. These were generated by a model that was trained on 10,000 hours of transcribed YouTube piano performances containing examples between 1k and 32k tokens in length. The model had 1024 latents and 24 self-attention layers. Training on this large-scale dataset yields high-quality samples with stylistic and structural coherence—one can identify repeating musical themes, different chord progressions, arpeggios and even ritardandos. Moreover, the main difference from our previous model trained on YouTube piano performances is that a 32k input size was feasible this time, so we only used full-length pieces for training! This allowed Perceiver AR to better model entire pieces with beginning, middle and end sections.

Next, we present audio samples from the symbolic domain, obtained by training on MAESTRO v3. The input representation in both cases was computed from MIDI files as described by Huang et al. in Section A.2, and the final outputs were synthesized using Fluidsynth.

Raw audio

Perceiver AR can also be used to generate samples from the raw audio domain. Here, we applied the SoundStream codec to MAESTRO v3 .wav files to encode the raw audio. After training the model, we generated samples and decoded them into the source domain. Keeping the context length fixed, we experimented with 3 different codec bitrates—12kbps, 18kbps, 22kbps—which, for an input length of 65k tokens, span 54.4s, 36.8s and 29.6s of music, respectively. The examples below illustrate the trade-off between sample duration and fidelity: codecs with lower bitrates model coarser structure and enable training on a longer period of time, but sacrifice audio quality.
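Those durations are consistent with each SoundStream token carrying roughly 10 bits (e.g. a 1024-entry codebook); note that the bits-per-token figure is an assumption made for this sanity check, not a number stated in the post:

# duration = tokens * bits_per_token / bitrate
tokens, bits_per_token = 65_536, 10   # bits per token is an assumed value
for kbps in (12, 18, 22):
    print(f"{kbps} kbps -> {tokens * bits_per_token / (kbps * 1000):.1f} s")
# 12 kbps -> 54.6 s, 18 kbps -> 36.4 s, 22 kbps -> 29.8 s
# (close to the 54.4 s / 36.8 s / 29.6 s quoted above)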

[Audio samples: 12 kbps · 18 kbps · 22 kbps]

You can listen to more raw audio samples here 🎵.

Bonus

To end on a high note (🙃), we invite you to enjoy Charlie Chen's creation: a music box that plays Perceiver AR outputs, adding an immensely nostalgic feel to the generated music!

@inproceedings{
  hawthorne2022general,
  title={General-purpose, long-context autoregressive modeling with Perceiver AR},
  author={Hawthorne, Curtis and Jaegle, Andrew and Cangea, C{\u{a}}t{\u{a}}lina and Borgeaud, Sebastian and Nash, Charlie and Malinowski, Mateusz and Dieleman, Sander and Vinyals, Oriol and Botvinick, Matthew and Simon, Ian and others},
  booktitle={The Thirty-ninth International Conference on Machine Learning},
  year={2022},
  url={https://arxiv.org/abs/2202.07765}
}

Coinbase Leads Users Astray By Recommending Everything Besides Bitcoin


Coinbase capitalizes on the altcoin craze to profit off users. Their “Top 10 Picks” omits bitcoin, and everything else on the list has performed poorly.

The below is a direct excerpt of Marty's Bent Issue #1212: “Save a friend, tell them to get out of the Coinbase casino.” Sign up for the newsletter here.

(Source)

You'll often hear “Bitcoin maximalists” derided as anti-free market for cautioning newcomers to stay away from altcoins and the exchanges that push them. Those snake oil salesmen who hiss at Bitcoiners often say that they are simply afraid of competition and don't want to admit that “Bitcoin has stagnated” and “the devs have gone elsewhere.” In reality, many Bitcoiners warn newcomers to stay away from shitcoins and the casinos that list them for trading because they have seen hordes of people led to slaughter by the siren calls of opportunists who care not about human freedom, sound money or decentralization, but about making as much money as possible, no matter how unethically it is acquired.

I highly recommend you freaks — especially any of you who have fallen prey to the siren calls of “a better Bitcoin” — to read through this thread from Sam Callahan, which dives into the overtly predatory tactics of Coinbase and their penchant for listing pre-mined altcoins that are utter trash and get auto-dumped on an unsuspecting retail market. Not only that, but Coinbase tends to hide bitcoin deep in the app so their customers overlook it or simply never find it. They are much more incentivized to siphon off fees from shitcoin trading than actually educating individuals about bitcoin and helping them acquire as much as possible.

I would call it a shame, but it's really worse than that. It's quite disgusting actually and Coinbase and its backers should be utterly ashamed of themselves for engaging in this type of bucket shop activity. A once somewhat respectable brand has completely turned itself into a contemptible bad actor that should be avoided at all costs.

Save yourself, your family and friends. Get your bitcoin off Coinbase and advise your network to do the same.



kleer001
559 days ago
That's not what I want a company I own shares in to do, that's dumb and gross.