This AI Research from China Provides Empirical Evidence on the Relationship between Compression and Intelligence

Many people believe that intelligence and compression go hand in hand, and some experts go so far as to say the two are essentially the same. Recent advances in LLMs and their impact on AI make this idea even more appealing, prompting researchers to examine language modeling through the lens of compression. In theory, any prediction model can be converted into a lossless compressor, and vice versa. Since LLMs have proven remarkably effective at compressing data, language modeling can itself be viewed as a form of compression.

Within the current LLM-based AI paradigm, this makes the argument that compression leads to intelligence all the more compelling. However, despite much theoretical debate, there is still little empirical evidence establishing such a link between compression and intelligence. Is it a sign of intelligence if a language model can losslessly encode a text corpus with fewer bits? That is the question a new study by Tencent and The Hong Kong University of Science and Technology aims to answer empirically. The study takes a pragmatic approach to the concept of “intelligence,” concentrating on a model’s ability to perform downstream tasks rather than straying into philosophical or contentious territory. Intelligence is assessed along three main abilities: knowledge and commonsense, coding, and mathematical reasoning.

More precisely, the team measured how effectively different LLMs compress external raw corpora in each relevant domain (e.g., GitHub code for coding ability). They then evaluated the same models on a range of downstream benchmarks and used the average benchmark score to quantify each model’s domain-specific intelligence.
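For concreteness, here is a minimal sketch of how compression efficiency can be measured as bits-per-character (BPC) of a causal language model over raw text. It assumes the Hugging Face transformers library and uses a placeholder checkpoint and a toy code snippet; it illustrates the idea rather than reproducing the authors’ exact pipeline.

```python
# Sketch: compression efficiency as bits-per-character (BPC) of a causal LM.
# Lower BPC = better compression. Model name and sample text are placeholders.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def bits_per_character(text: str, max_len: int = 1024) -> float:
    """Average number of bits the model needs to encode each character of `text`."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_len)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # `out.loss` is the mean cross-entropy per predicted token, in nats.
    n_predicted = enc["input_ids"].shape[1] - 1  # the first token has no prediction target
    total_bits = out.loss.item() * n_predicted / math.log(2)
    return total_bits / len(text)

sample = "def add(a, b):\n    return a + b\n"  # e.g., a snippet from a code corpus
print(f"BPC: {bits_per_character(sample):.3f}")
```

In practice one would average BPC over a large held-out corpus per domain (e.g., GitHub code, web text, arXiv math) rather than a single snippet.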

Based on experiments with 30 public LLMs and 12 different benchmarks, the researchers establish a striking result: the downstream ability of LLMs is almost linearly related to their compression efficiency, with a Pearson correlation coefficient of about -0.95 in every assessed intelligence domain. Importantly, the linear relationship also holds for most individual benchmarks. Recent, parallel investigations have examined the relationship between benchmark scores and compression-equivalent metrics such as validation loss, but only within a single model series, where checkpoints share most configurations, including model design, tokenizer, and training data.
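As a toy illustration of the reported relationship (using invented numbers, not the paper’s data), one can compute the Pearson correlation between per-model BPC values and average benchmark scores: better compression (lower BPC) paired with higher scores yields a strongly negative coefficient.

```python
# Toy illustration with made-up numbers: lower BPC should pair with higher
# average benchmark scores, giving a strongly negative Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

bpc = np.array([0.95, 0.88, 0.80, 0.72, 0.65])        # hypothetical compression efficiency per model
avg_score = np.array([32.0, 38.5, 47.0, 55.5, 63.0])  # hypothetical average benchmark scores

r, p_value = pearsonr(bpc, avg_score)
print(f"Pearson r = {r:.3f} (p = {p_value:.2g})")  # strongly negative, in the spirit of the reported ~ -0.95
```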

This study is the first to show that intelligence in LLMs correlates linearly with compression regardless of model size, tokenizer, context window length, or pretraining data distribution. By demonstrating a universal linear association between the two, the research supports the long-standing theory that better compression signifies greater intelligence. Compression efficiency is also a useful unsupervised metric for LLMs, since the underlying text corpora can be easily refreshed to mitigate overfitting and test contamination. Because of its linear correlation with models’ abilities, the results support compression efficiency as a stable, flexible, and reliable metric for evaluating LLMs. To make it easy for future researchers to gather and update their compression corpora, the team has open-sourced its data-collection and processing pipelines.

The researchers highlight a few caveats to their study. First, fine-tuned models are not suitable as general-purpose text compressors, so the analysis is restricted to base models. Nevertheless, they argue that the connections between a base model’s compression efficiency and the benchmark scores of its fine-tuned counterparts are intriguing and deserve further investigation. Furthermore, the results may hold only for sufficiently trained models and may not apply to LMs in which the assessed abilities have not yet emerged. The work opens up promising avenues for future research and invites the community to investigate these questions further.

Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


Dhanshree Shenwai is a computer science engineer with solid experience in FinTech companies spanning the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone’s life easier in today’s evolving world.
