Rational Agents 2

Alex Zeester
2 min read · Mar 1, 2021

In my previous post, I gave a brief introduction to rational agents. As a reminder, a rational agent is one that takes the best possible action in a given situation.

Agents are not isolated: they perceive their environment through sensors and act upon it through actuators. Agents come in three broad types: human, robotic, and software agents.

The environment is everything that affects the agent’s percepts and actions. A percept is everything the agent perceives through its sensors at a given moment, and the complete history of these percepts is stored in a percept sequence. In theory, an agent cannot base a decision on anything it has never perceived and that is not part of its built-in knowledge. The agent function maps any given percept sequence to an action, describing the agent’s behavior from the outside; internally, that function is implemented by an agent program.
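To make that mapping concrete, here is a minimal Python sketch of a table-driven agent in a hypothetical two-square vacuum world; the world, the table entries, and the function name are illustrative assumptions, not a standard API:

```python
# Minimal table-driven agent sketch for a hypothetical
# two-square vacuum world. The percept sequence is the agent's
# complete perceptual history; the agent function maps that
# history to an action, and the agent program implements it.

percept_sequence = []  # everything the agent has perceived so far

# Illustrative lookup table: maps a tuple of percepts to an action.
# A single percept here is (location, status), e.g. ("A", "Dirty").
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    # ... longer histories would each need their own entry,
    # which is why table-driven agents do not scale.
}

def table_driven_agent(percept):
    """Agent program: records the new percept, then looks up an action."""
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), "NoOp")
```

Calling table_driven_agent(("A", "Dirty")) on a fresh agent returns "Suck". The table explodes combinatorially with the length of the percept sequence, so this is a conceptual starting point rather than a practical design.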

Rationality depends on four criteria (bundled into a small sketch after this list):

  • The performance measure that defines the criterion of success
  • The agent’s prior knowledge of the environment
  • The actions that the agent can perform
  • The agent’s percept history
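Loosely, these four ingredients can be bundled into a single structure. The sketch below is only an illustration; the field names are my own, not standard terminology:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RationalityContext:
    """The four ingredients that rationality depends on (illustrative)."""
    performance_measure: Callable  # scores the resulting environment states
    prior_knowledge: dict          # what the agent knows up front
    actions: List[str]             # what the agent is able to do
    percept_history: List = field(default_factory=list)  # what it has seen
```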

Good or bad agent?

In AI, doing the “right thing” is typically evaluated through consequentialism: judging an action by its consequences. A performance measure captures whether the actions taken by the agent produce good results; if they do, the agent has done well.

The challenge of building computer or robotic agents is that they cannot define their own performance measures; for now, humans have to do it for them. This is also one reason we are regularly bombarded with articles claiming that AI could pose a serious threat in the future: wrong, conflicting, or destructive performance measures or goals, defined by a limited group of people, could cause serious damage to humankind. That discussion, however, is beyond the scope of this blog post.

In general, the performance measure should reflect what we actually want achieved in the environment, rather than how we would like the agent to behave.
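For instance, in the same hypothetical vacuum world, a measure that scores the state of the environment (clean squares over time) is safer than one that scores the agent’s behavior, such as the amount of dirt sucked up, which an agent could game by dumping dirt and cleaning it again. A sketch, with an illustrative state representation:

```python
def performance_measure(history_of_states):
    """Award one point per clean square per time step (illustrative).

    Scoring environment states, not agent behavior, avoids rewarding
    an agent that dumps dirt just to suck it up again.
    """
    return sum(
        sum(1 for status in state.values() if status == "Clean")
        for state in history_of_states
    )

# Example: two time steps in a two-square world.
states = [{"A": "Dirty", "B": "Clean"}, {"A": "Clean", "B": "Clean"}]
print(performance_measure(states))  # 3
```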

In the end, an agent is rational if it selects a course of action that is expected to maximize its performance measure, given its percept history and built-in knowledge.
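Put as a sketch, assuming the agent has some model of how likely each outcome of an action is; the model and helper names below are hypothetical:

```python
def rational_action(actions, outcome_model, performance_measure):
    """Pick the action with the highest *expected* performance.

    outcome_model(action) yields (probability, resulting_state) pairs,
    standing in for the agent's percept history plus built-in knowledge.
    """
    def expected_score(action):
        return sum(p * performance_measure(state)
                   for p, state in outcome_model(action))
    return max(actions, key=expected_score)

# Example with made-up numbers: "Suck" has the higher expected score.
model = {
    "Suck":  [(0.9, {"A": "Clean", "B": "Dirty"}),
              (0.1, {"A": "Dirty", "B": "Dirty"})],
    "Right": [(1.0, {"A": "Dirty", "B": "Dirty"})],
}
measure = lambda state: sum(1 for v in state.values() if v == "Clean")
print(rational_action(["Suck", "Right"], model.get, measure))  # Suck
```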
