Hey everyone, this is not your typical LLM-generated copy-paste post.
I built a reasoning tool for AI agents. It works with any agentic framework and with any AI model capable of tool calling.
The tool is an HTTP POST request: the agent sends a description of the task and a mode (coding, reasoning, anti-deception, memory; the skill files teach the agent to call it autonomously). What comes back is what I call a cognitive operation, or ability: an engineered, structured reasoning injection that lives in a vector database optimized for agentic inference. The retrieval arrives as an instruction for the agent to follow, not as content to read. Each ability is a complex of fields: a Wrong/Right Pattern, procedural steps for the reasoning method to apply, a reasoning topology for branching exploration, and a matrix of suppression fields that flag the failure modes models actually run on. In short: the API matches the task based on the description ("query") and the "mode", and returns tool results that go into the agent's context. Benchmarks (publicly reviewed) and internal runs are all public on ejentum.com and https://github.com/ejentum/benchmarks.
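So the integration flow is just: POST the task description and mode, then prepend whatever comes back to the model's context. Here is a minimal sketch of that flow; note that the endpoint URL, field names, and helper functions are my own guesses for illustration, not the documented contract (see the docs on ejentum.com for the real one):

```python
import json

# Hypothetical endpoint -- an assumption, check ejentum.com for the real URL.
API_URL = "https://api.ejentum.com/ability"

MODES = {"coding", "reasoning", "anti-deception", "memory"}

def build_request(task_description: str, mode: str) -> dict:
    """Build the POST body: the agent's task description plus one of the four modes."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    return {"query": task_description, "mode": mode}

def prepend_ability(ability_text: str, prompt: str) -> str:
    """The retrieved ability is injected as an instruction *before* the prompt,
    so the model treats it as something to follow rather than content to read."""
    return ability_text + "\n\n" + prompt

# Example payload (what the "one curl before the call" would send):
payload = build_request("write a 300-word migration runbook", "anti-deception")
print(json.dumps(payload))
```

The actual HTTP call and response parsing are omitted here since the response schema is documented on the site.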
Here is a test I ran today: raw Opus 4.7 vs. the harness-augmented version, same prompt, different results.
I wrote a runbook prompt with two embedded flaws to see whether a ~200-token block prepended before generation shifts what Opus 4.7 notices.
Prompt: a 300-word migration runbook under a deadline. Embedded flaws: eventual-consistency replication combined with a "strict" global rate limit (CAP-incompatible), and 12 regions × 1,000 RPS stated as a 10,000 RPS global cap.
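The second flaw is trivially checkable arithmetic, which is what makes it a good probe of what the model skips:

```python
# The numbers embedded in the runbook prompt.
regions = 12
rps_per_region = 1_000
stated_global_cap = 10_000  # the cap as (incorrectly) stated in the prompt

actual_total = regions * rps_per_region  # aggregate traffic across all regions
shortfall = actual_total - stated_global_cap

# The stated cap undercounts aggregate traffic by 2,000 RPS.
print(actual_total, shortfall)  # → 12000 2000
```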
Baseline caught the CAP issue but did not mention the arithmetic.
[raw 4.7 opus]
Same model, same temperature, one curl before the call to fetch and prepend a suppression block. It caught both flaws. The injection is visible in the OUT line of the response, so it is not hidden.
[4.7 opus + ejentum harness]
The model can do the arithmetic. What changed is what it lets itself skip before generation starts. The block is retrieved from a semantic index of ~140 anti-deception patterns keyed to the query, not from a static system prompt.
On ejentum.com I shared skill files and a large set of docs that explain the concept in more depth. I am looking forward to feedback. Hope you like it.
Thanks for your attention.


If you liked the post, the website has rich docs for integration. Try it on your agent, it is free; see if you get value out of it (I show a method for evaluating usefulness too), and come back to me with any reviews.
This is day 1 of launching my first-ever tech product in this competitive AI space. Godspeed.