Tom (Tomáš) Pecher

CS & AI Student | Aspiring Software Developer | Passion for ML and Wolfram

What Is Intelligence, Really?

In my (totally unbiased) opinion, artificial intelligence is the most fascinating field in all of science. Yet ironically, I think the name "artificial intelligence" fails to capture what is truly interesting about the field. Even though the concept of AI has been around for decades, experts still fundamentally disagree on what artificial intelligence actually means. Personally, I view AI not just as a subfield of computer science, but as a broader study of intelligence itself, both natural and artificial, and specifically of how complex, intelligent behavior can emerge from simple rules or systems. This interpretation is deliberately broader than most. It includes neural networks and machine learning, but also biological evolution, physical systems, and so forth (if you can name it, it's intelligent). In my view, this breadth reflects the true essence of AI and cleanly encapsulates all of its innumerable subfields. You might reasonably argue that this definition is too vague. If we follow this logic, could we count almost anything (for example, traffic lights, binoculars, waterfalls) as an example of intelligence? This is a completely fair critique, and I welcome you to discuss it with me further if this type of thing interests you. But to define AI properly, we must first define intelligence, and that is where things get difficult.


The Problem with Defining Intelligence

To define intelligence, we would need a way to classify everything in the universe as either "intelligent" or "not intelligent." But making such a clean division is (in my opinion) impossible. Let us take large language models (LLMs) as an example. Most people agree they exhibit some form of intelligence, but fundamentally, they are just massive mathematical functions. So, if these functions are intelligent, are all mathematical functions intelligent? Probably not. Perhaps it is a matter of size or complexity, but then where exactly is the threshold? At what point does a function, or a system, become “intelligent”? Any cutoff point we choose would be arbitrary, a line drawn simply to satisfy our human desire to categorize things neatly. This arbitrariness suggests that our attempts to define intelligence in terms of strict boundaries are not only practically impossible, but meaningless too.


Intelligence as a Universal Property

I hold the philosophical view of mereological nihilism, which suggests that objects like tables, trees, or even humans do not really “exist” as wholes. Only the smallest physical parts of the universe (like particles or fields) exist, and everything else is just a way of grouping them for our convenience. From this perspective, intelligence is not a property that belongs to specific objects (like people, robots, or animals). Instead, it is a property of the universe as a whole, expressed through different patterns and phenomena. For example, is a large language model intelligent when it answers your question? Is it less intelligent than a human writing an essay, or more intelligent than a tree evolving to survive in a harsh climate? From a universal standpoint, these are all just different manifestations of complex behavior emerging from simple rules. The particles in a waterfall change constantly, yet we still call it “a waterfall.” It is not a fixed object, but a phenomenon. Following this logic, intelligence is not a property of any object, but rather the emergent property of the universe acting on itself.


So, What Counts as Intelligence?

If intelligence is something that can emerge from simple systems, and if there is no clear line between “intelligent” and “non-intelligent”, then we are left with two options: either intelligence is everywhere, present to some degree in every process in the universe, or it is nowhere, and the word names nothing real at all.

Personally, I lean toward the first view: intelligence is everywhere. The universe is full of fascinating mysteries, and it is our duty as scientists to try to understand them. I believe this approach to AI and intelligent systems is a sensible and practical way of understanding ourselves and the world around us, and a far more interesting perspective than simply trying to shove neural networks into every piece of technology. The very notion and success of neural networks hinges on the fact that there are patterns and relationships that govern how the universe behaves; by using a flexible function composed of nested linear units and nonlinear activations, we can train a neural network that approximates such a relationship, even if it is arbitrarily complex in reality. This is the underlying (and largely untold) mechanism of intelligence that makes neural networks so powerful, and much more interesting than just treating them as a black box.
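The approximation claim above can be made concrete with a minimal sketch: a single layer of "nested linear units" passed through a nonlinearity can recover a hidden rule (here sin(x)) from samples alone. As a simplification, the hidden weights below are left random and only the output layer is fitted by least squares, a shortcut compared to full gradient-descent training, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)   # observed inputs
y = np.sin(x)                                     # the "hidden rule" to recover

W = rng.normal(scale=2.0, size=(1, 50))           # random hidden weights
b = rng.uniform(-3.0, 3.0, size=50)               # random hidden biases
hidden = np.tanh(x @ W + b)                       # one nonlinear hidden layer

# Fit only the linear output layer to the targets.
out_w, *_ = np.linalg.lstsq(hidden, y, rcond=None)
y_hat = hidden @ out_w

max_err = np.max(np.abs(y_hat - y))
print(f"max approximation error: {max_err:.4f}")
```

Even with random hidden features, 50 units are enough to track the sine curve closely across the whole interval, which is the "flexible function" idea in miniature.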


Why I Study AI

As I have hopefully convinced you, artificial intelligence is not just about machines mimicking humans. It is about understanding the very nature of intelligent behavior: how it arises, where it shows up, and whether it even makes sense to draw boundaries around it. Trying to rigidly define intelligence may be more about satisfying human psychology than uncovering objective truth. People often overlook the fact that AI is a study of ourselves just as much as it is a study of computer systems. Neural networks are intelligent agents, but so are humans and all living things; even society as a whole can be considered a single self-sustaining agent. To me, the most interesting questions are not "can we do this or that with AI?" (eventually, we will probably do most things with AI). Rather, we should ask: what does this say about us as individuals and as a species, and how will we be changed because of it? No technological advancement is ever purely technological; people change technology and technology changes people. We will be changed, that is inevitable, but we have the capacity to make AI a change for better or for worse. And so, to finish this thought: I study AI so that this change is a positive one, for all intelligent agents (toasters and fleshlings alike).

AI Experience

Bipedal Walking in Increasingly Treacherous Terrain (2025, RL)

As part of a group project (and our collective introduction to RL), we conducted an experiment in which we implemented a variety of RL-based methods and trained them on the OpenAI Gym bipedal walker environment. Specifically, we pretrained the models on the base environment (a flat surface) and then tested the best-performing models on the "hardcore" environment (a surface with random bumps and holes). This is a common RL problem and is widely considered to be quite difficult. Nevertheless, we managed to train a Soft Actor-Critic (SAC) and a SUNRISE agent to solve the hardcore environment (reach 300+ reward). More impressively, the SUNRISE agent managed to converge to this optimal strategy five times faster than existing models we could find. This project was great fun and has sent me down an RL rabbit hole that I am still exploring in my individual project. Many thanks to my group members for their hard work and dedication that made this project possible (they are all great programmers and great people, so check out their LinkedIns here: Marilyn D'Costa, Ptolemy Morris, Dhru Randeria, George Rawlinson).
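For readers unfamiliar with SAC, its core is an entropy-regularised critic update. The sketch below is illustrative only (not our project code): it shows the soft Bellman target the critics are trained toward, using the minimum of two critic estimates minus an entropy bonus that keeps the walker exploring instead of collapsing early onto a single gait.

```python
import numpy as np

def soft_target(reward, done, q1_next, q2_next, log_prob_next,
                gamma=0.99, alpha=0.2):
    """Entropy-regularised TD target used to train SAC's critics.

    q1_next, q2_next: the two critics' estimates at the next state-action.
    log_prob_next:    log-probability of the sampled next action.
    alpha:            temperature weighting the entropy bonus.
    """
    # Min over twin critics reduces overestimation bias;
    # subtracting alpha * log pi adds the entropy bonus.
    v_next = np.minimum(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * v_next
```

On a terminal step (done = 1) the target collapses to the raw reward; otherwise it bootstraps from the entropy-adjusted next-state value.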

Voice-based Action Selection (2024, NLP)

As part of the VIP study "Creating immersive training experiences in VR", we intended to create a VR simulation for training users to be "effective bystanders" when witnessing sexual harassment. As lead developer, my role was to create a system that would classify the user's speech towards the perpetrator into one of a set of predetermined actions (such as distracting the perpetrator). Using TensorFlow, I implemented and fine-tuned an LSTM model that reached 97% accuracy on test data. The system is currently in the testing phase, and I hope my contribution will help the study reach its goals.
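The project itself used TensorFlow's built-in LSTM layers, but as a rough illustration of what a single LSTM cell computes at each timestep (the gate stacking order and toy sizes here are my own simplification), a plain NumPy sketch looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM timestep. W: (4H, D), U: (4H, H), b: (4H,).
    Gates are stacked in the order [input, forget, output, candidate]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate: how much new info to admit
    f = sigmoid(z[H:2*H])        # forget gate: how much old memory to keep
    o = sigmoid(z[2*H:3*H])      # output gate: how much memory to expose
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell (long-term) state
    h = o * np.tanh(c)           # new hidden (short-term) state
    return h, c

# Run a toy 5-step sequence through the cell.
rng = np.random.default_rng(0)
D, H = 8, 4                      # input and hidden sizes
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

The gated cell state is what lets the model carry context across a whole utterance, which is why LSTMs suit speech-to-action classification of this kind.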

Hypercustomizable Subtitling System (2024, NLP)

As part of the "Experimental Systems Project" group module, we created a system that generates subtitles for any video in real time. A key goal of our system was to ensure that the underlying model would be able to perform even in noisy scenarios where the audio quality was poor. We achieved this by dynamically switching between models to best adapt to the audio conditions. The system was a success, and we were able to demonstrate it to our peers and lecturers.
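The switching idea can be sketched roughly as follows (the model names, the noise-floor constant, and the SNR threshold are hypothetical placeholders, not our actual configuration): estimate how noisy each audio chunk is, then route clean audio to a fast model and noisy audio to a more robust, slower one.

```python
import numpy as np

def estimate_snr_db(chunk, noise_floor=1e-3):
    """Crude SNR estimate: mean signal power against an assumed
    noise-floor amplitude. Real systems would track noise adaptively."""
    power = np.mean(chunk ** 2)
    return 10.0 * np.log10(power / noise_floor ** 2 + 1e-12)

def pick_model(snr_db, threshold_db=20.0):
    """Route the chunk: lightweight model for clean audio,
    heavier noise-robust model otherwise."""
    if snr_db >= threshold_db:
        return "fast-clean-audio-model"
    return "robust-noisy-audio-model"
```

The threshold becomes a tunable latency/accuracy trade-off: raising it sends more chunks to the robust model at the cost of real-time performance.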

Current Work

Building a Robust and Scalable Traffic Control System using Reinforcement Learning (2025, RL)

As my third-year individual project, I am working on creating an RL-based traffic control system that can dynamically adapt to traffic conditions and (hopefully) outperform simple actuated systems. A decent amount of research has been dedicated to this area; however, most systems struggle with robustness (they exhibit unpredictable behaviour) and scalability (they struggle to respond to new scenarios). This project aims to address both properties in such a way that would make RL-based traffic control systems not only viable but preferable to existing fixed-control and actuated systems.
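To give a flavour of the formulation (a toy sketch of my own, not the project code, which targets far richer traffic simulations): a single intersection can be modelled as two competing queues, with the agent choosing which direction gets the green light and being penalised for the total number of waiting cars. Tabular Q-learning is enough to learn a sensible policy on something this small.

```python
import random
from collections import defaultdict

random.seed(0)

ACTIONS = (0, 1)   # 0: green for north-south, 1: green for east-west

def step(queues, action, arrival_p=0.4, service=2):
    """Advance the toy intersection one tick: cars arrive at random,
    and the direction with green discharges up to `service` cars."""
    queues = list(queues)
    for d in range(2):
        queues[d] += int(random.random() < arrival_p)
    queues[action] = max(0, queues[action] - service)
    reward = -sum(queues)                         # fewer waiting cars is better
    return tuple(min(q, 10) for q in queues), reward   # cap the state space

Q = defaultdict(float)
state = (0, 0)
alpha, gamma, eps = 0.1, 0.9, 0.1

for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

The robustness and scalability problems in the project are exactly what this toy hides: real networks have many intersections, continuous state, and scenarios the agent never saw during training.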

Future Plans

When I am less busy, I hope to delve into some of these potential avenues:

Glossary