Agents Can Now Clone Themselves and Do Crazy Things (Part I: Deep Stock Analysis)

Most chatbots, such as ChatGPT and Claude, are becoming more powerful every day. They are incorporating more tools, characters and features, such as Canvas or Artifacts, to improve usability. However, if you are a heavy user of AI (especially as a non-coder), the limitations are always the same: the more data you feed in and the more complex the task, the less usable the AI becomes:

  • it becomes lazy and takes shortcuts
  • it hallucinates
  • it forgets things

The quality degrades massively, and worst of all, you still pay for it.

Most of these issues are known limitations caused by one of the biggest constraints of today's AI: the context window. Think of it as the AI's limited working memory: the more data it holds, the more overwhelmed the AI becomes while still trying to please you. The result is a pure waste of time and money.

The solution

There have been many advances aimed at overcoming these technical limitations, such as plugging in external memory, but one incredibly powerful solution is the multi-agent approach.

Using its reasoning capabilities, the AI breaks a task it has never seen before into subtasks and sends them to other AIs (so-called subagents) to complete. It then aggregates the results and answers the user's request.
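
To make this concrete, here is a minimal sketch of that orchestrator/subagent loop in Python. The call_llm helper is a hypothetical placeholder for whatever chat-completion API you use (its dummy body only exists so the structure can be run and inspected), and the decomposition prompt, the JSON format and the example subtasks are illustrative assumptions, not how any particular product implements it.

    import json

    def call_llm(prompt: str) -> str:
        """Stand-in for a real chat-completion call; replace with your provider's client.
        The dummy body just echoes, so the overall structure can be run and inspected."""
        if "JSON list" in prompt:
            return '["analyse the balance sheet", "summarise recent news"]'
        return f"(dummy subagent answer to: {prompt[:60]}...)"

    def orchestrate(user_request: str) -> str:
        # 1. The orchestrator reasons about the request and splits it into subtasks.
        plan = call_llm(
            "Break the following request into a few self-contained subtasks. "
            f"Return a JSON list of strings.\n\nRequest: {user_request}"
        )
        subtasks = json.loads(plan)

        # 2. Each subtask goes to a fresh subagent that sees ONLY that subtask,
        #    not the whole conversation, so its context window stays small.
        reports = [
            call_llm(f"Complete this subtask and report your findings:\n{task}")
            for task in subtasks
        ]

        # 3. The orchestrator aggregates the subagent reports into one answer.
        return call_llm(
            "Combine these reports into a single answer to the original request.\n"
            f"Request: {user_request}\n\nReports:\n" + "\n---\n".join(reports)
        )

    print(orchestrate("Give me a deep analysis of ACME Corp's latest quarter."))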

In this blog post, I will discuss how the multi-agent approach changes everything and how you can try it out for a few pounds using the open-source chatbot Cherry Studio.

Disclaimer: I have no affiliation whatsoever with Cherry Studio. This is just one powerful tool that I love to use for many tasks, especially agentic use cases.

Why is this different?

The so-called subagent starts with a fresh memory. It doesn't need to know the entire context; it only needs to know the subtask at hand. It executes the task, delivers the result and disappears. Any further subtask starts with a new LLM instance. This core difference from having one large LLM try to do everything by itself changes the entire game.

Handling much more complex tasks becomes possible:

  • much less hallucination
  • much higher quality (a subagent focused on one small task performs far better than a single model juggling one huge task, as the sketch after this list illustrates)
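
To see why the fresh context matters, here is a back-of-the-envelope sketch with purely hypothetical token counts for a stock-analysis task: a single agent drags every document into every call, while each subagent only carries the slice it needs.

    # Hypothetical token counts for the inputs of a deep stock analysis.
    documents = {"10-K filing": 60_000, "earnings call": 20_000, "news digest": 15_000}

    single_agent_context = sum(documents.values())       # one model carries everything, every call
    largest_subagent_context = max(documents.values())   # each subagent carries only its own slice

    print(f"single agent per call:  ~{single_agent_context:,} tokens")
    print(f"largest subagent call:  ~{largest_subagent_context:,} tokens")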

And with parallelisation, the end-to-end experience can be much faster than sequential processing, though this also depends on the tooling of the multi-agent solution.
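
Because the subagent calls are independent and mostly spend their time waiting on the network, they can run concurrently. A minimal sketch, again assuming a hypothetical call_llm placeholder (here simulated with a short sleep):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call; the sleep mimics network latency."""
        time.sleep(1)
        return f"(dummy result for: {prompt[:40]}...)"

    def run_subagents_in_parallel(subtasks: list[str]) -> list[str]:
        # Each call is an independent, fresh-context subagent, so nothing is shared
        # between threads and the I/O-bound calls overlap instead of queueing.
        with ThreadPoolExecutor(max_workers=4) as pool:
            return list(pool.map(lambda t: call_llm(f"Complete this subtask:\n{t}"), subtasks))

    # Three one-second calls finish in roughly one second instead of three.
    print(run_subagents_in_parallel(["subtask A", "subtask B", "subtask C"]))

Threads are enough here because the work is waiting on the network rather than on the CPU; the same pattern works with async clients if your tooling provides them.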

Let’s talk tools

If you follow the news, you might have heard about Claude Cowork. Built on top of a framework Anthropic released a few months ago, called the Agent SDK, Claude Cowork can process highly complex tasks end-to-end using a high-reasoning, multi-agent approach.
