2 min read · from Machine Learning

[P] AgentGuard – a policy engine + proxy to control what AI agents are allowed to do

I’ve been seeing a trend where AI agents are getting more and more autonomy: running shell commands, calling APIs, even handling sensitive operations.

But most setups I’ve seen have basically no enforcement layer. It’s just “hope the agent behaves.”

So I built a project called AictionGuard.

It sits between the agent and the tools and enforces a policy before anything executes.

Some examples:

  • Block commands like rm -rf * before they run
  • Require approval for things like sudo or production API calls
  • Log every action with reasoning + decision (audit trail)
  • Define everything in a YAML policy file

Right now it’s early (alpha), but:

  • Core policy engine is working
  • HTTP proxy is implemented
  • Python + TypeScript SDKs work
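The SDKs' exact API isn't shown in the post, but the core idea (evaluate every action against the policy before it executes) can be sketched in a few lines of Python. All names here are hypothetical, not the real SDK:

```python
import fnmatch

# Hypothetical rules, mirroring the YAML policy: first match wins.
RULES = [
    ("rm -rf *", "block"),     # destructive deletes are refused outright
    ("sudo *", "approve"),     # privileged commands pause for human approval
]

def evaluate(command: str) -> str:
    """Return the decision for a command: 'block', 'approve', or 'allow'.

    Walks the rule list in order and returns the first matching rule's
    decision; anything unmatched falls through to 'allow'.
    """
    for pattern, decision in RULES:
        if fnmatch.fnmatch(command, pattern):
            return decision
    return "allow"
```

In a real deployment the proxy would call something like `evaluate()` on each intercepted tool call and append the command, matched rule, and decision to the audit log before letting anything through.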

There are still gaps (no persistent DB, some features aren't wired up yet), but the foundation is there and I'm actively closing them; I wrote the README before the project itself.

I’d really appreciate:

  • Feedback on the architecture
  • Ideas for policy rules
  • Contributors interested in AI safety / infra

Repo:
https://github.com/Caua-ferraz/AictionGuard

Curious: if you’re building or using agents, what’s the #1 thing you’d want to restrict or monitor?

submitted by /u/SpecificNo7869


Tagged with

#AgentGuard
#AictionGuard
#policy engine
#AI agents
#autonomy
#enforcement layer
#YAML policy file
#HTTP proxy