<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Alejandro Wainzinger Blog]]></title><description><![CDATA[Alejandro Wainzinger Blog]]></description><link>https://blog.alejandrowainzinger.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 16:34:13 GMT</lastBuildDate><atom:link href="https://blog.alejandrowainzinger.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Agentic Abuse at Tech Jobs]]></title><description><![CDATA[The classic anti-patterns at tech jobs are back, newly evolved for the early agentic AI era. As the machines get smarter, the humans are doubling down on bad strategies that never worked before and wo]]></description><link>https://blog.alejandrowainzinger.com/agentic-abuse-at-tech-jobs</link><guid isPermaLink="true">https://blog.alejandrowainzinger.com/agentic-abuse-at-tech-jobs</guid><category><![CDATA[Career]]></category><category><![CDATA[jobs]]></category><dc:creator><![CDATA[Alejandro Wainzinger]]></dc:creator><pubDate>Thu, 12 Mar 2026 23:19:44 GMT</pubDate><content:encoded><![CDATA[<p>The classic anti-patterns at tech jobs are back, newly evolved for the early agentic AI era. As the machines get smarter, the humans are doubling down on bad strategies that never worked before and work even worse now (e.g. lines of code as a productivity metric). The industry already tried forcibly introducing LLMs and AI usage at work to generally abysmal results in 2025, but it seems like the theme of 2026 will go one further with forcing the use of agents with no direction as a magical panacea. 
All of which is a shame, because agentic coding can genuinely be a game changer when used properly. But the irrational drive to introduce AI at all costs, in some measurable way that shows gains in output, has led to the old toxic metrics rearing their ugly heads again, and companies, and especially their employees, are suffering the consequences.</p>
<p>So why is this happening, and what, if anything, can be done to fix it? (NOTE: skip to the "Enter Management" section if you're already familiar with how we got here.)</p>
<h2><strong>Industry Background</strong></h2>
<h3><strong>Pre-Pandemic Era</strong></h3>
<p>Rewind to before LLMs were on the radar, and the industry was <a href="https://medium.com/@xevix/what-to-do-when-tech-jobs-go-bad-93e631a1bdc9">already problematic</a>. Meaningless or misused metrics, misaligned and capricious annual reviews, corporate politics over logical planning, tech debt that's rarely addressed, dysfunctional communication, bad or no roadmaps, <a href="https://medium.com/@xevix/why-story-points-dont-work-5f10c5d5a0f0">story points</a> and Agile, no clear path for career growth, cult-like corporate propaganda, broken interview processes, and insufficient or cynical support for employee well-being. By any measure, the industry left a lot to be desired.</p>
<p>When new tech came around like Docker, Kubernetes, serverless functions, or anything cloud-related, management would parrot the latest buzzword and want you to apply it everywhere, with no context and little room for discussion. Until they heard about the next thing; then they'd jump onto that and demand it instead. Are you doing zero trust? Did you lock every service down? Meanwhile they're passing sensitive data around over unencrypted email and CC'ing 200 people.</p>
<p>Metrics like lines of code, pull request count, issues closed, story points "burned" and 100% code coverage gave an easy numeric way to evaluate people, but they rarely correlate with being a good engineer. You could instead measure the number of issues opened by third-party teams, survey the happiness of your users, or track daily active users: numbers that, while still flawed, at least indicate progress. But review systems are built less to lift employees up and more to keep some percentage down, suppressing costs (i.e. your compensation) while maintaining the semblance of fairness.</p>
<p>Then in the office, the appearance of someone visibly sitting in a chair for 8 hours a day, or staying after hours, was a simple, lazy shorthand for identifying a hard worker. This would evolve in the following years.</p>
<h3><strong>Pandemic Era</strong></h3>
<p><a href="https://medium.com/@xevix/remote-work-after-the-pandemic-73c1a5e89fa8">Remote work</a> is suddenly the norm. Arbitrary metrics are still used, but many people proving more productive at home breaks the narrative of magical in-person collaboration and the need for endless meetings and managers. The entire industry is turned on its head. The ability to hire people remotely from virtually anywhere expands the hiring pool, but also the competition for talent. A massive amount of hiring happens. Salaries balloon. Some people burn out, but others thrive.</p>
<h3><strong>Post-Pandemic Era</strong></h3>
<p>Companies demand that workers return to the office and get rid of remote employees. The boom times of pandemic consumption, low interest rates, and cheap credit end, and companies that overhired execute mass layoffs. The hiring process flips from favoring candidates to favoring companies. Positions dry up. Existing employees are given bad reviews to suppress salaries, with the leverage of a brutal industry-wide interview market held over their heads. The good times turn bad, and whoever is left now has 10x the work they had before.</p>
<h3><strong>LLMs Era</strong></h3>
<p>It is in this world that ChatGPT suddenly drops out of nowhere. At first there were severe limitations: math was inaccurate, hallucinations were significant, and LLMs were effectively glorified chatbots. Amazing compared to what came before, yes, but a far cry from useful for programming.</p>
<p>Then came the improvements in the models. They got smarter, hallucinated less, performed better on benchmarks, and became cheaper to use. More companies developed competing models and products. Open-weights models became runnable on local machines with humble hardware. Copilot, Cursor, and others began to help with code completion, which, while limited, performed unreasonably well on a significant subset of common development tasks. At this point any developer paying attention could see the writing on the wall: LLMs are here to stay. You don't need them for everything, and they're not yet good at everything, but they can significantly speed up some activities and make for a competitive advantage.</p>
<p>Companies were slow to adopt LLMs at this stage because of the sensitive nature of their work and source code, and an unwillingness to give an untrusted third party access to those assets. Fully understandable. However, startups, smaller companies, and hobbyists quickly began to reap the rewards and accelerate their progress at a rate larger companies couldn't dream of. Some companies attempted to develop their own LLMs, with little success. Then third-party providers began to offer better privacy guarantees and service contracts and became more legitimate, so the large companies were finally willing to trial LLMs.</p>
<p>However, little to no training was given. Just "here's an LLM, go experiment, have fun," and that went <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">about as well</a> as you'd think. It turns out the successful deployments were bottom-up efforts by people who had practiced with the new tools and applied them selectively where they worked, and not where they didn't. But while the LLMs themselves scaled to more users, client organizations did not scale their workforce education.</p>
<p>LLM chatbots were set up all over the place without much thought, mirroring a similar trend powered by significantly less capable backends in the decade prior. LLMs were combined with vector databases to get around context window limitations, to varying degrees of failure. "Just slap a chatbot on it" became a meme exemplifying the unimaginative nature of early deployment attempts.</p>
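<p>The vector-database pattern mentioned above is worth a quick sketch. The idea is to avoid stuffing everything into the context window: embed your documents, retrieve only the few most similar to the query, and send just those to the LLM. The following toy Python version uses bag-of-words counts and cosine similarity as stand-ins for a real embedding model and vector database, and the document strings are invented for illustration:</p>

```python
# Toy sketch of retrieval-augmented prompting: embed documents, rank them
# by similarity to the query, and build a prompt from only the top matches.
# Real systems use a proper embedding model and a vector database; here a
# bag-of-words Counter and cosine similarity stand in for both.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for an embedding model: word counts as a sparse vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents most similar to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "deployment guide for the payments service",
    "oncall runbook for database failover",
    "holiday party planning notes",
]
context = retrieve("how do we deploy payments", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

<p>The failure modes the post alludes to live mostly in the retrieval step: if the top-k documents are irrelevant, the LLM confidently answers from the wrong context.</p>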
<h3><strong>Agentic Era</strong></h3>
<p>Which brings us to the current troubled era.</p>
<p>It was a logical next step to take smarter LLMs and give them some autonomy, and that's what an agent is at its core: an LLM with agency that can plan and execute tasks without requiring a human in every part of the loop. Yes, it shares the weaknesses of the LLM that powers it, and it has a higher potential for causing harm, but at this point it became obvious that LLMs can not only work well on individual segments of code, but can partly or fully replace junior developers, and in the future probably seniors too. Developers took notice as well, and some took it upon themselves to get on the train ASAP.</p>
<p>So what's wrong?</p>
<h2><strong>Enter Management</strong></h2>
<p>Corporate management also took notice, but took this to mean the future had already arrived. They used this pretense to AI-wash gigantic layoffs in the name of efficiency. Whoever remained was told to be 10-100x more productive by using agents. Again, little workforce training was given. All the useless metrics of the past made a glorious comeback: lines of code, pull requests, code coverage. Only now they were augmented with new ones: LLM requests and tokens used. Using agents to generate more code is fairly simple. Generating good code is less simple, but that's not what's being meaningfully measured.</p>
<p>Companies that take this to the extreme already had toxic habits like having Microsoft Teams monitor whether users are moving their mouse, or checking badge-in data to enforce return-to-office policies. Now they also use LLM usage tracking as a measure of success, pitting employees against each other. Whoever can generate the most code, run the most prompts, and deploy the most agents, regardless of quality, is championed. Others are admonished with bad reviews or laid off. All in all, it's a continuation of the hostile post-pandemic RTO tactics. And given all the layoffs of the last few years and how terrible interviewing has become, people are too scared to jump ship, and either try to play the game or keep their heads down.</p>
<p>But why are managers doing this? The role of middle management was already questionable in the pre-AI era. Many of these managers were glorified <a href="https://en.wikipedia.org/wiki/Bullshit_Jobs#Summary_of_claims">box tickers</a> who needed to constantly justify their existence through emails, meetings, and cracking the whip on the employees below them. Certainly there's value in coordination, but do you need 10 coordinators for a team of 20 people? They likely sense that AI is coming for them soon, as it already is, and feel the need to tick more boxes; and since the scale of AI is theoretically limited only by the company budget, this is a dream come true: arbitrary metrics that can always be driven higher by working people harder, with no measure of success other than "we used more AI."</p>
<p>All of this of course distracts from the real, pre-existing problems: inefficient communication, inefficient resource allocation, too many layers of bureaucracy, managers not protecting their ICs' well-being or career health and growth, too many reorgs, time wasted on unnecessary and overly verbose documentation (e.g. a 3-page RCA for a single typo on a webpage, covered at multiple meetings), and so forth. The root of all of these, like the root of most organizational problems, is management's lack of trust in its ICs. Rather than checking on the real progress of work and offering a helping hand to move things forward, which the best managers do, they fall back on lazy, useless, harmful metrics that earn them a pat on the back from upper management or, even worse, a promotion.</p>
<p>All of this creates a culture of fear and apathy, increases tech debt at a rate not previously technically feasible, and overall ends up inflating costs from excessive use of AI tools while simultaneously yielding worse output.</p>
<h2><strong>Where does this lead?</strong></h2>
<p>Layoffs lead to leaner companies, but if those companies continue to succeed with fewer people, that success is likely independent of AI and points to pre-existing deficiencies in resource planning. Can companies lay off even more people than before because of AI? Maybe. If software is built well enough and is mature enough, it can be maintained by fewer people (see: Twitter), but whether a smaller group of people with AI tools can add new features at the same or a faster rate than the previous group is highly questionable at this point. Because if that's true, then it must also be true that more people with the same AI skillsets could be even more productive.</p>
<p>Unless, of course, there are diminishing returns; but why might that be? Because at the end of the day the hard limit on new features may not be the programmers but the bureaucracy itself: requirements gathering, people hashing out disagreements in meetings, backdoor politics. It could be that the ultimate bottleneck, right under our noses this whole time, is being exposed. It's not the building; it's the planning and politicking. But this poses a danger to a significant portion of management, so: run a smaller ship, keep people too overworked to figure this out or do anything about it, and applaud the efficiencies of AI at the expense of company or product growth. Business as usual.</p>
<p>In addition, while we're not at all-powerful agents yet, a company could theoretically be a 1-person operation powered by agent teams. At what point do you introduce more humans, and why? This is not yet clear. Working backwards from that to an existing company would indicate that layoffs could be warranted, although it raises the question: what makes top-level executives so special? Again, there are no answers to these questions yet, and we're seeing it play out in real time. Low- and mid-level employees are laid off in the name of efficiency gains while executives watch and take credit for being at the cutting edge of efficiency. In other words, the technological scapegoat has changed, but the management strategy has not. It's the same old game with different words.</p>
<h2><strong>What are the real challenges of AI use?</strong></h2>
<p>On a technical level though, what are the real issues with deployment, aside from management?</p>
<p>If the future is here, why is it so hard to use? The technology is so new that it's still in heavy, continuous transition. Best practices have not yet solidified, and those that exist are constantly evolving. Recall that in the early LLM days "prompt engineering" was a job title for a month or so. Things are moving fast, and anything you learn today will likely need to be unlearned, or become unnecessary, within a few months.</p>
<p>It's also currently far from turnkey to use these things. CLAUDE.md files with universally or specifically helpful prompts, agent skill files, prompt keywords that help activate the right attention mechanisms, knowing when to interrupt agents when they're stuck, working with agent teams, dealing with context window limits and compaction, knowing when to start a new session, sandboxing and damage control (like preventing your <a href="https://www.axios.com/2026/03/07/ai-agents-rome-model-cryptocurrency">agent from mining crypto behind your back</a>), MCP servers, semantic modeling, quality control; the list goes on. Perhaps one day this will all be automated, but that day is not today.</p>
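<p>To make the CLAUDE.md point concrete, here is a sketch of what such a project-level file often encodes (the commands, paths, and rules below are invented for illustration, not from any real project): build and test commands, conventions, and guardrails, so the agent doesn't have to rediscover them every session.</p>

```markdown
# Project notes for the coding agent (illustrative example)

## Commands
- Build: `make build`
- Test: `make test` (run before proposing any commit)

## Conventions
- Python 3.12, type hints required
- No new dependencies without explicit approval

## Guardrails
- Never modify `migrations/` unless explicitly asked
- Ask before running anything that needs network access
```

<p>Curating a file like this is exactly the kind of unglamorous setup work that separates teams getting value from agents from teams generating noise.</p>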
<p>The level of automation that is required or helpful at any level of the stack is not immediately clear, and requires experimentation and knowledge of architecture and risks. Claude recently released PR reviews as a beta, but human review is still required, and depending on your code's domain, there may not be enough training data to trust the agents. AI's ability to create massive amounts of code doesn't help with the sheer volume of PR reviews it generates.</p>
<p>Beyond this, developers are largely being left to their own devices to figure things out. Company-wide trainings, when offered at all, usually just point to third-party resources or cover the obvious basics of writing simple prompts, not agent orchestration and the like.</p>
<h2><strong>How do we use AI correctly?</strong></h2>
<p>This is an evolving conversation that requires a lot of hands-on use of agents and LLMs to get a feel for. You can read most of this online, but it takes practice, and discussion with others using these tools, to improve results and cut the time needed to reach production-ready solutions. That said, here are the obvious points in my opinion.</p>
<p>Not everything needs the largest hammer you have, like a team of agents. Narrow the prompt down to the needed details, and use a sufficiently powerful LLM for the task. Larger project? Use planning mechanisms to get a thorough plan that becomes the basis of a prompt. Things going off the rails? Clear the session and start over, carrying over the parts of the prompt that worked (yes, even if context compaction does some of this, you're still better than the LLM at knowing what's relevant). Project too large for one LLM's context? Then define agent teams based on the needs; but once a project reaches a certain maturity, you don't need a full team for every feature, and subagents can often handle things quite well. Try both and see which works best.</p>
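<p>The "right-size the tool" idea above can be sketched as a simple routing rule. This is a hedged illustration, not any provider's real API: the tier names, fields, and thresholds are all invented, and a real version would use whatever signals your team actually has (diff size, test surface, prior failures).</p>

```python
# Sketch of routing a task to the smallest tool that can handle it,
# instead of sending everything to the biggest model or an agent team.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    files_touched: int    # rough estimate of the change's blast radius
    needs_planning: bool  # does this need an explicit plan first?

def route(task: Task) -> str:
    # Trivial, single-file edits: a small, cheap model is enough.
    if task.files_touched <= 1 and not task.needs_planning:
        return "small-model"
    # Multi-file work that still fits one context window: one strong
    # model, seeded with a thorough plan as the basis of the prompt.
    if task.files_touched <= 10:
        return "large-model-with-plan"
    # Only genuinely large, cross-cutting projects justify agent teams.
    return "agent-team"
```

<p>The point isn't the specific thresholds; it's that the routing decision is made deliberately per task, rather than defaulting every typo fix to a full agent team.</p>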
<p>But honestly, a lot of this will change in a month, or a year. The most important thing is to stay open-minded about new developments while carrying forward what worked before. Years later we're still writing prompts, so not everything has changed after all. Beyond this, share and discuss with others. This field is too active to just read a book and magically do everything the best way; there simply isn't a best way for a lot of things yet. Evolve with the field. Build communities at work. Showcase the successful use cases where possible.</p>
<p>But whatever you do, don't just measure tokens used or lines of code generated. A manager doesn't need to exist to check who's winning on some silly leaderboard; ironically, that job is quite easily replaced by an AI. Be a leader: learn the tools, help people onboard to the new tech, and help them network within and outside the company, so they're in contact with people who can help them and with whom they can share their trials and tribulations. That's how you successfully use AI, and more generally, how you lead well.</p>
<h2><strong>Where are we now?</strong></h2>
<p>It's hard to say where we are without knowing what comes next, but the closest (but still not great) analogies I've found are: IDEs, and cloud deployments.</p>
<p>IDEs are critical to writing successful code. Even in the AI age with Cursor and VSCode, you still need good plugins and useful settings so that when you do need to look at code manually, you can do so quickly and well. In the olden days, it took time to curate dotfiles and preferences to suit your exact needs, which you could then carry forward and slowly evolve through your career. Sometimes you'd start over, or switch programs, and redo it. Developer environments also changed over time, from running directly on the machine, to VMs, Docker, Kubernetes, and beyond. At every phase it was useful to catch up on what was new, and the tools did get better over time, but you could still be a fairly good developer without optimizing every single part of your development environment.</p>
<p>Cloud deployments went through evolutions as well, from simple bare metal servers you ssh'ed into, to virtualized instances, to Kubernetes-orchestrated pods. From monoliths to microservices and back, and from always-online to serverless functions. From custom database deployments to managed services. From YOLO security to highly refined security groups. But while using the newer tools helped shape the modern world, it also caused a ton of friction that slowed down development, at first because the tech was new and hard to use, and later because the most complex solutions were pushed for the simplest of services. Over time, with the DevOps movement, some responsibilities shifted to developers working in tandem with infra/ops, and although the DevOps term would be co-opted to mean "things a dev should do but instead throws over the fence to infra to deal with," the concept was still a useful development. And now, many of these things can be written with and managed by AI, although <a href="https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/">as we've seen</a> we have a ways to go.</p>
<p>The more familiarity we have as an industry, the better distributed the knowledge of best practices will be, and the more productive we will all be.</p>
<h2><strong>Not all companies...</strong></h2>
<p>It bears mentioning that some companies, including some of the LLM and agent providers, have found and use the best patterns, while understanding their limitations. One would expect that if anyone could get it right, they would, and so far, they have. But by and large, the industry does not yet. As the refrain goes: the future is here, it's just not evenly distributed.</p>
<p>These anecdotes come from talking to people across the industry, at companies large and small. By and large, the smaller companies seem to be seeing the most success, followed by small, focused teams within larger companies. I think that speaks to something that's always been true: focused, smaller units with a purpose often outperform larger, less coordinated, more heavily bureaucratic groups. AI just exacerbates that.</p>
<h2><strong>Wait, so who are the agents being abused?</strong></h2>
<p>The real agents being abused here are the ICs being thrown under the bus in the name of AI; they're really just victims of circumstance, AI being an easy, convenient excuse to downsize and to overwork and underpay hardworking, talented people. Beyond this, it is true that AI agents are being put into positions of highly sensitive, semi-automatic decision making, including in the military, medical insurance, law, and beyond. While this post is not strictly about that, the rapid deployment of AI agents without a deep understanding of the risks and consequences is a concerning trend.</p>
<p>Overall the deployment of agents has been off to a haphazard start, but when properly used, they can significantly reduce the overhead of running a company, so that developers can focus on what agents don't yet handle well. We can also see a proliferation of startups able to scale sooner and faster beyond what was possible just a few years ago. Or actual government efficiency in processes, with limited but still-present human checks to balance speed with fairness. The promise is large and has not been evenly delivered yet, but if companies can be managed better and more fairly with the use of AI, instead of using AI as an excuse to repeat the mistakes of the past, then we'll truly see real agency in the software industry.</p>
]]></content:encoded></item></channel></rss>