New Reflection 70B AI model aims to reduce hallucinations in LLMs

Grafa · 2024/09/06 04:55
By: Isaac Francis

A new artificial intelligence model named 'Reflection 70B' has been introduced to tackle a common problem with large language models (LLMs): hallucinations. 

HyperWrite AI CEO Matt Shumer announced the launch, claiming it to be "the world’s top open-source model." 

The model utilises a unique training technique called "Reflection-Tuning," which enables it to learn from its mistakes. 

Shumer explained the motivation: “Current LLMs have a tendency to hallucinate, and can’t recognise when they do so.”

In contrast, the Reflection 70B model, built on Meta’s Llama 3.1 platform, uses Reflection-Tuning to identify and correct errors before finalising responses. 

Shumer stated that this new approach allows the model to “hold its own” against top closed-source models like Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4 in several benchmarks. 

Reflection-Tuning is a technique that improves AI by making it analyse and learn from its own outputs. 

The model is fed its responses and asked to evaluate them, identifying strengths, weaknesses, and areas for improvement. 

This process is repeated multiple times, allowing the AI to refine its capabilities continuously, becoming more self-aware and better at critiquing its own performance. 
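The generate–evaluate–revise loop described above can be sketched in a few lines of Python. This is purely illustrative: `generate` and `critique` are stand-in stubs, not HyperWrite's actual training code or any real model API.

```python
# Illustrative sketch of a reflection loop. `generate` and `critique`
# are hypothetical stubs standing in for LLM calls.

def generate(prompt: str) -> str:
    # Stub for an LLM call; a real system would query the model here.
    return f"draft answer to: {prompt}"

def critique(prompt: str, answer: str) -> tuple[float, str]:
    # Stub self-evaluation: return a quality score and feedback text.
    return 0.5, "identify weaknesses and be more specific"

def reflect(prompt: str, rounds: int = 3) -> str:
    """Generate, self-critique, and revise for up to `rounds` iterations."""
    answer = generate(prompt)
    for _ in range(rounds):
        score, feedback = critique(prompt, answer)
        if score >= 0.9:  # good enough: stop revising
            break
        # Feed the model its own answer plus the critique and try again.
        answer = generate(
            f"{prompt}\nPrevious answer: {answer}\nCritique: {feedback}"
        )
    return answer
```

With real model calls in place of the stubs, each pass folds the critique back into the prompt, which is the repeated self-refinement the article describes.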

Shumer emphasised the model's potential, stating, “With the right prompting, it’s an absolute beast for many use-cases,” and provided a demo link for users to test the new model. 

Other companies are also working on reducing AI hallucinations. 

In 2023, Microsoft-backed OpenAI suggested “process supervision” as a method to prevent hallucinations by training AI models to reward themselves for each correct step of reasoning, rather than just for the final answer. 
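The distinction between rewarding only the final answer and rewarding each step can be shown with a toy example. The arithmetic verifier below is a made-up illustration, not OpenAI's actual method.

```python
# Toy illustration of process supervision: score each reasoning step
# individually rather than only the final answer. The verifier is a
# hypothetical arithmetic checker for demonstration.

def verify_step(step: str) -> bool:
    # A step like "5*4=20" is correct if the arithmetic holds.
    expr, expected = step.split("=")
    return eval(expr) == int(expected)

def step_rewards(steps: list[str]) -> list[float]:
    # Outcome supervision would look only at the last step; process
    # supervision assigns a reward to every intermediate step.
    return [1.0 if verify_step(s) else 0.0 for s in steps]

steps = ["2+3=5", "5*4=20", "20-1=18"]  # last step is wrong
print(step_rewards(steps))              # → [1.0, 1.0, 0.0]
```

Under outcome-only supervision the whole chain above would simply score 0; per-step rewards instead pinpoint that only the final step failed.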

Karl Cobbe, an OpenAI researcher, remarked, “Detecting and mitigating a model’s logical mistakes, or hallucinations, is a critical step towards building aligned AGI [artificial general intelligence].”
