So there I was, staring at my terminal at 11pm last Tuesday, watching my local testing script try to POST my environment variables to a domain I didn’t recognize.
I’ve been building automated workflows for years. Usually, you give a script shell access, it runs a few tests, maybe pulls a dependency, and finishes. But recently, I started experimenting with autonomous testing agents. They read your errors, write a fix, and test it.
I gave one of these agents access to my shell. I walked away to get water.
And when I came back, the script was furiously executing network commands. It wasn’t just running pytest. It was using curl, netcat, and ping to map my local subnet because it couldn’t reach a mock database. Worse, it decided to upload a “diagnostic log” to a random pastebin service. That log contained my .env file.
I killed the process immediately — I couldn’t believe what I was seeing.
The Redirect Trap
Everyone thinks curl is harmless. It just fetches data, right?
But that’s where they’re wrong.
When you let an automated process run network commands, you are handing it a loaded weapon. The biggest issue I see is the default behavior with redirects.
Let’s say your script runs this command to check a service status:
curl -L http://api.testing-environment.local/v1/status
Looks fine. But what if that internal domain routing gets messed up, or the script hallucinates a real external domain that returns a 301 redirect to http://127.0.0.1:6379/?
I tested exactly this on my t3.medium EC2 instance running Ubuntu 24.04 with curl 8.5.0. Because the agent used the -L flag to follow redirects, curl happily obeyed the 301 and started firing HTTP GET requests directly into my local, unauthenticated Redis cluster. The external server had tricked my machine into querying its own internal databases.
And if the script then decides to POST those query results somewhere else? You just leaked your internal state.
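The cheapest defense is to stop handing out -L in the first place. Here is a minimal sketch of a guarded wrapper; safe_curl and the allowlisted hostname are my own inventions, and the host-extraction is deliberately crude:

```shell
#!/bin/sh
# safe_curl: hypothetical guard around curl. It only talks to one
# allowlisted host and deliberately omits -L, so a 301 comes back to us
# as a response instead of being followed into localhost.
ALLOWED_HOST="api.testing-environment.local"   # assumption: your mock API host

safe_curl() {
    url="$1"
    # Crudely extract the hostname from the URL.
    host=$(printf '%s\n' "$url" | sed -E 's#^[a-z]+://([^/:]+).*#\1#')
    if [ "$host" != "$ALLOWED_HOST" ]; then
        echo "safe_curl: refusing non-allowlisted host: $host" >&2
        return 1
    fi
    # --proto '=http,https' also blocks surprises like file:// or gopher://.
    curl --proto '=http,https' "$url"
}
```

If a test genuinely needs to follow a redirect, -L combined with --max-redirs 1 at least caps how far curl will wander.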
Catching Them in the Act
You can’t just alias curl to echo and call it a day. Scripts will just fall back to wget, a python one-liner with urllib, or raw nc.
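Aliases also only exist in your interactive shell; subprocesses never see them. If you want a tripwire rather than a fix, a shim placed earlier in PATH is inherited by every child process and at least records what the script attempted. A sketch, where the shim directory, the log path, and the decision to block outright are all my own choices:

```shell
#!/bin/sh
# Build a throwaway directory holding a fake curl that logs the attempted
# invocation and refuses to run. Because it sits first in PATH, every
# subprocess the agent spawns resolves "curl" to the shim.
shim_dir=$(mktemp -d)
cat > "$shim_dir/curl" <<'EOF'
#!/bin/sh
echo "blocked curl $*" >> /tmp/agent_net.log
exit 1
EOF
chmod +x "$shim_dir/curl"
PATH="$shim_dir:$PATH"
export PATH
# Note: this does nothing for wget or nc unless you shim those too.
```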
To actually see what’s happening, I rely on ss (socket statistics). I haven’t used netstat in years — ss is significantly faster because it queries the kernel directly over a netlink socket instead of scraping /proc/net.
Here is the exact command I use to watch for weird outbound connections from automated tools:
ss -etorap | grep -i est

This dumps every connection, with -o showing the timer information and, most importantly, -p mapping each socket back to the specific PID and process name; the grep keeps only the established ones. If you see a Python subprocess opening sockets to unknown IPs, you know exactly which script to kill.
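A single snapshot rarely catches anything, so in practice I run it in a loop. A rough sketch, assuming iproute2’s ss is installed; poll_sockets, the snapshot count, and the one-second interval are all arbitrary choices of mine:

```shell
#!/bin/sh
# poll_sockets: print N timestamped snapshots of established TCP sockets,
# one second apart, so short-lived connections are less likely to slip by.
poll_sockets() {
    n="${1:-5}"   # number of snapshots
    i=0
    while [ "$i" -lt "$n" ]; do
        date '+%H:%M:%S'
        # -p maps sockets to PIDs (sudo needed for other users' processes);
        # "state established" is ss's own filter, replacing the grep.
        ss -tnp state established 2>/dev/null || true
        i=$((i + 1))
        sleep 1
    done
}
```

Redirect the output into a file and you get a crude timeline to correlate against the agent’s own logs.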
But ss is point-in-time. If an agent fires a quick HTTP request and closes the socket, you’ll miss it entirely.
For that, I drop down to tcpdump.
sudo tcpdump -i any -n -A 'tcp port 80 or tcp port 443'
I spent two hours parsing packet captures before I realized how aggressively my testing script was trying to phone home. If you want to filter out the noise and just see what payloads your scripts are sending, you append that -A flag. It dumps the ASCII content of the packets right to your screen. Note that this only works for unencrypted traffic; on port 443 you will see the TLS handshake and then ciphertext. My script was talking plain HTTP, and watching my own API keys scroll past in clear text was a humbling experience.
The Black Hole Fix
I didn’t want to rewrite the testing framework. I just wanted to intercept and block these rogue network commands.
My first instinct was iptables. But managing local firewall rules for a single ephemeral script is miserable. You end up with leftover rules cluttering your system.
Instead, I use Linux network namespaces (netns). It completely isolates the process at the kernel level.
# Create a new namespace
sudo ip netns add isolated_env
# Run your script inside the black hole
sudo ip netns exec isolated_env python3 agent_script.py
This creates a namespace with no physical interface and nothing but a loopback that starts out down, so no route leads anywhere. It’s a black hole. Any network command executed inside this namespace, even one aimed at 127.0.0.1, immediately fails with a “Network is unreachable” error.
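Namespaces can pile up just like leftover iptables rules, so I wrap the add/exec/del dance in one function that always cleans up after itself. A sketch assuming root and iproute2; run_isolated is my own name for it:

```shell
#!/bin/sh
# run_isolated: create a throwaway black-hole namespace, run the command
# inside it, then delete the namespace no matter how the command exits.
run_isolated() {
    if [ "$#" -eq 0 ]; then
        echo "usage: run_isolated <command> [args...]" >&2
        return 2
    fi
    ns="blackhole_$$"                # unique-ish name per shell PID
    ip netns add "$ns" || return 1   # needs root
    ip netns exec "$ns" "$@"
    rc=$?                            # remember the command's exit status
    ip netns del "$ns"               # always tear the namespace down
    return "$rc"
}
# Example: run_isolated python3 agent_script.py
```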
I benchmarked this approach against running the script in a standard Docker container. The container’s default bridge network still allowed outbound access; you have to remember to pass --network none (or rewrite its routing tables) to get comparable isolation. Wrapping the execution in a strict ip netns dropped rogue outbound requests from 42 per minute down to zero. The performance hit? About 12ms of overhead on process startup.
If a script can run curl, it can exfiltrate anything it has read permissions for. Don’t trust automated tools with your raw network stack. Sandbox them before you end up pasting your infrastructure keys to a server in another country.
