Posts

Why AI-Generated Code Has Security Gaps That Look Like Clean Code

The most dangerous bugs in software are the ones that are invisible during review. Code that looks well-structured, compiles cleanly, and handles common inputs correctly can still contain serious security vulnerabilities that only appear when someone specifically looks for them. AI-generated code exhibits this pattern more often than human-written code, for specific and understandable reasons. Understanding why helps teams know what to look for and where to invest testing effort.

This is not an argument against using AI coding assistants. It is an argument for understanding their specific failure modes so you can address them systematically.

Why AI-Generated Code Looks Secure When It Isn't

When an AI model generates code, it draws from patterns learned across a large corpus of training examples. The code it produces reflects what is common in that corpus. Common patterns tend to be structurally correct: they follow language conventions, use appropriate data types, and hand...
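A concrete illustration of the pattern: two lookups that are nearly identical in structure, where only one is safe. The function and table names here are hypothetical, not from the article; the point is that the vulnerable version reads as clean, conventional code until someone probes it with hostile input:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Looks clean: short, typed, conventional -- but it interpolates
    # untrusted input directly into SQL (an injection vulnerability).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Structurally almost identical; parameter binding is the only change.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# A classic injection payload: the unsafe query becomes
# ... WHERE name = 'x' OR '1'='1', matching every row.
payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)   # every row in the table
filtered = find_user_safe(conn, payload)   # no rows match literally
```

Both functions pass a review that checks structure and conventions; only a reviewer (or test) specifically looking for injection catches the difference.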

Why Most Web Apps Get Access Control Wrong From Day One

Access control is almost never the first thing a team builds. It's the thing they add when someone asks "wait, can any user see anyone else's data?" The answer, in a distressing number of cases, is yes. This isn't a criticism of developer skill. It's a critique of the development process. Access control failures aren't random. They follow predictable patterns that emerge from how authorization is approached at the start of a project.

The "We'll Add It Later" Pattern

The most common access control mistake is treating authorization as something to retrofit rather than design upfront. The reasoning is understandable: in the early stages of a product, you have a small team, a limited user base, and authorization complexity that seems manageable with a simple admin flag. That changes as the product grows. More user types appear. Different users need different access. The simple admin flag becomes a set of overlapping conditionals. By the time ...
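The move beyond a single admin flag starts with object-level checks: verifying that this user may access this specific resource, not just that they are logged in. A minimal sketch with hypothetical names (the in-memory DOCUMENTS store, AuthorizationError, and get_document are illustrative, not from the article):

```python
class AuthorizationError(Exception):
    """Raised when a user requests a resource they do not own."""

# Hypothetical in-memory store mapping document id -> record with owner.
DOCUMENTS = {"doc-1": {"owner_id": "u-alice", "body": "q3 plan"}}

def get_document(doc_id: str, current_user_id: str) -> dict:
    doc = DOCUMENTS[doc_id]
    # Object-level check: route-level auth ("is this user logged in?")
    # is not enough; confirm this user may see this specific record.
    if doc["owner_id"] != current_user_id:
        raise AuthorizationError("not allowed to view this document")
    return doc
```

The check lives next to the data access, so a new endpoint cannot fetch the record without passing through it. That placement, decided on day one, is what the "add it later" pattern forfeits.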

The UX Case for Better Data Tables in Enterprise Web Applications

Enterprise software has a reputation for bad UX, and data tables are a significant part of why. Most internal tools and business applications rely heavily on tables to display, filter, and manage data, but most of those tables were built quickly, extended piecemeal, and never given a systematic UX review. Users adapt to the friction because they have to. The table becomes the part of the application people complain about but continue to use because there is no alternative.

The business case for improving these tables is stronger than it appears on first consideration. The time users spend fighting poor table UX - exporting to Excel to do filtering the table should handle, repeating filter combinations manually because filter state does not persist, or working around missing bulk actions by processing rows one at a time - is direct productivity loss. Multiplied across a user base of 50 or 500 employees, it is a number worth calculating before deciding that a table UX improvement is a l...

Why INP Replaced FID and What That Means for Your Site's Performance Score

In March 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP) as the responsiveness metric in the Core Web Vitals set. This wasn't a minor update to thresholds or methodology. It was a fundamental change in what Google measures when it evaluates whether a page is responsive to users. If your site's responsiveness score looked fine under FID but has declined since the transition to INP, you're not alone. INP is a stricter metric that captures failures FID was structurally unable to detect. Understanding why the change happened and what INP actually measures is necessary context for fixing it.

Why FID Was Limited

FID measured the delay between a user's first interaction with a page (usually a click or tap) and the point at which the browser began processing the event. It captured only the delay before processing started, not the time required to complete the processing or render the visual response. This meant FID could look good even on ...

Why ETL Pipeline Design Decisions Made Today Become Tomorrow's Technical Debt

Data pipelines accumulate technical debt faster than almost any other category of software. The reason isn't complexity - most pipelines are structurally simple. It's that they're written to solve an immediate problem (move this data from here to there) without accounting for how requirements will change, how systems will evolve, and how the people who maintain the pipeline will need to understand and modify it months later. This piece is about the design decisions at the beginning of a pipeline project that determine whether it stays maintainable or becomes a liability.

The "Quick Script" Problem

Most pipeline technical debt starts with a script that wasn't supposed to last. A developer spends a day connecting two systems, it works, and it gets put in production. Six months later, the original developer is gone, the script has no documentation, it runs on a server that nobody is sure about, and changing anything requires reading the code and hoping you und...
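One antidote to the quick-script pattern is naming the extract, transform, and load stages explicitly, so each can be read, tested, and changed on its own. A minimal sketch under assumed names (the functions and the sample row are illustrative, not from the article):

```python
from typing import Iterable

def extract(rows: Iterable[dict]) -> list[dict]:
    # A real pipeline would read from the source system here; taking
    # rows as a parameter keeps the stage testable without that system.
    return list(rows)

def transform(rows: list[dict]) -> list[dict]:
    # Every cleaning rule lives in one obvious place, instead of being
    # scattered through a one-off script.
    return [
        {"email": r["email"].strip().lower(), "amount": int(r["amount"])}
        for r in rows
    ]

def load(rows: list[dict], sink: list) -> int:
    # A real pipeline would write to the destination; a list stands in.
    sink.extend(rows)
    return len(rows)

sink: list = []
loaded = load(transform(extract([{"email": " A@B.com ", "amount": "3"}])), sink)
```

The structure costs minutes on day one. Six months later, the maintainer who inherits it can change one stage without reverse-engineering the other two.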

What SaaS Vendor Contracts Miss and How to Negotiate Better Terms

Signing a SaaS contract at the end of a vendor evaluation is the moment that most teams treat as a formality. The hard work of the evaluation is done. You have a vendor. Now you sign and move to implementation.

The contract is not a formality. It's where the evaluation's conclusions get locked into binding obligations. Terms that look harmless at signature can become significant sources of operational friction or financial risk twelve months later. Understanding which terms to examine and which are worth pushing on protects the value of the evaluation you just ran.

The Terms That Create Risk in Practice

Auto-renewal clauses. Most SaaS contracts renew automatically unless canceled within a notice window. Notice windows of 30, 60, or 90 days before renewal are standard. A 90-day notice window on an annual contract means your cancellation decision must be made three months before the contract ends. If you miss the window, you're committed to another year regardless of whet...
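The notice-window arithmetic is easy to get wrong by a month, which is exactly how teams miss the cancellation deadline. A minimal sketch of the calculation (the dates and the cancellation_deadline helper are hypothetical, not from any particular contract):

```python
from datetime import date, timedelta

def cancellation_deadline(renewal_date: date, notice_days: int) -> date:
    # Last day you can give notice and still avoid auto-renewal.
    return renewal_date - timedelta(days=notice_days)

# Hypothetical contract renewing Dec 31, 2025 with a 90-day window:
# notice must be given by early October, not "sometime in Q4".
deadline = cancellation_deadline(date(2025, 12, 31), 90)
```

Putting that date on a shared calendar at signing time costs nothing; discovering it in November costs a year of fees.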

How to Build Idempotent Webhook Event Processors

Webhook delivery is at-least-once. If your endpoint returns a non-2xx response, the sender will retry. If there's a network timeout during delivery, the sender may retry even though the delivery succeeded, because the success response never reached it. This means your event processor must be idempotent: processing the same event twice must produce the same result as processing it once.

Idempotency is not a nice-to-have. It's the fundamental contract that makes webhook integrations reliable. Without it, retried deliveries create duplicate records, double-charged accounts, and duplicate notifications.

The Core Idempotency Pattern

The standard pattern is event ID tracking. Every webhook sender includes a unique identifier in the payload. Before doing any processing work, check whether you've already processed this ID. If yes, return success and stop. If no, process the event and record the ID as handled.

async def process_webhook_event(event_id: str, payload: dict) -> None:
    async with db.transa...
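The excerpt's code is cut off, so here is a self-contained sketch of the same event-ID pattern, using a synchronous sqlite3 store in place of the article's db object (the table name, helper name, and side_effects list are assumptions for illustration). A primary-key constraint plus a single transaction makes recording the ID and doing the work atomic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

side_effects: list = []  # stand-in for real work: charges, emails, inserts

def process_webhook_event(event_id: str, payload: dict) -> bool:
    """Return True if processed, False if the event was a duplicate."""
    try:
        with conn:  # one transaction: record the id and do the work together
            conn.execute(
                "INSERT INTO processed_events (event_id) VALUES (?)",
                (event_id,),
            )
            side_effects.append(payload)  # the actual processing work
        return True
    except sqlite3.IntegrityError:
        # Primary-key violation: this event id was already handled.
        # Return success to the sender so it stops retrying.
        return False

process_webhook_event("evt_1", {"type": "payment"})
process_webhook_event("evt_1", {"type": "payment"})  # retried delivery: no-op
```

Letting the database's uniqueness constraint arbitrate, rather than a check-then-insert in application code, also closes the race where two retries arrive concurrently.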