Punched Cards and Prompt Engines: The 50-Year CS Relay

A dusty Fortran IV textbook from Skopje highlights the shift from syntax to intent in the age of AI agents.


Holding a textbook on Fortran IV feels different when you realize it actually helped build the world we are now trying to automate.

I recently picked up a weathered copy used by a student at the Faculty of Electrical Engineering at SS Cyril and Methodius University in Skopje during the late 1970s. That student eventually saw his son follow the exact same academic path, graduating decades later with a Master’s from the Faculty of Computer Science and Engineering. Looking at that book amid the AI landscape of 2026 is like staring at a hand-cranked engine from the seat of a self-driving shuttle. We are witnessing a generational relay race in which the baton has passed from the unforgiving syntax of punched cards to the fluid, intent-based world of generative agents.

The Foundation: Engineering When Every Byte Hurt

In the late 1970s, programming was an act of extreme deliberation. Fortran IV was the gold standard for scientific computing, but it demanded a specific kind of mental architecture. You did not just sit down and type.

You mapped out logic on coding sheets because you had to.

Success required a certain mechanical empathy. You knew exactly how the machine would interpret your loops and your GOTO statements because any error meant waiting hours for a batch job to fail. At SS Cyril and Methodius University back then, computer science was still deeply rooted in electrical engineering. The curriculum was about the metal, the signals, and the uncompromising logic of early high-level languages. Code was a scarce resource. Today, we treat code like oxygen. It is everywhere, mostly invisible, and often taken for granted.

The Generational Relay

There is a profound continuity in this story. While the institution in Skopje remained the anchor, the definition of a programmer shifted beneath it.

The father learned how to talk to the machine in its own language. The son, graduating three decades later, entered a world where the abstraction layers had grown so thick that the underlying hardware was almost an afterthought. This is the natural progression of our field. We build bigger tools so we can solve bigger problems.

However, looking at that Fortran IV book, I wonder if we have traded depth for speed. The father had to be a logician to get a single program to run. The modern graduate has to be an architect, managing vast ecosystems of libraries and (increasingly) AI agents that write the boilerplate.

Syntax vs. Intent

As an AI researcher, I find the comparison between Fortran IV and the models of 2026 fascinating. Fortran IV was entirely about syntax. If you missed a comma, the world ended.

Our current models focus on intent. With the recent Chrome DevTools MCP updates, we are seeing agents that do not even need us to write code anymore. They work behind login gates by reusing your already-authenticated browser sessions, performing tasks from a simple natural language request. We have moved from "How do I tell the machine to do this?" to "What do I want the machine to achieve?"

This is a massive leap in developer experience, but it introduces a new kind of fragility. When the father wrote Fortran, he knew exactly why the code worked. When a 2026 developer uses an agent to spin up a microservice, they are often relying on a probabilistic guess from a black box.
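The syntax-versus-intent shift can be made concrete with a toy sketch in Python. The `Agent` class below is invented purely for illustration, not a real API; the point is the shape of the interaction, not the implementation:

```python
# The "syntax" era in miniature: the programmer spells out every step,
# and the machine does exactly -- and only -- what is written.
def mean_from_file(path):
    with open(path) as f:
        values = [float(line) for line in f if line.strip()]
    return sum(values) / len(values)


# The "intent" era in miniature: the programmer states a goal and an
# agent decides how to achieve it. `Agent` is a hypothetical stand-in
# for systems like MCP-driven browser agents, not a real library.
class Agent:
    def run(self, goal: str) -> str:
        # A real agent would plan, generate, and execute code here.
        return f"(plan for: {goal})"


# 1978: "How do I tell the machine to do this?"
#   mean_from_file("measurements.txt")
# 2026: "What do I want the machine to achieve?"
#   Agent().run("Average the measurements in measurements.txt.")
```

The explicit version is auditable line by line; the intent version trades that transparency for speed, which is exactly the fragility discussed above.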

What Remains Essential?

Despite the radical shift in tools, the core engineering mindset remains the same. Whether you are debugging a Fortran IV loop in 1978 or auditing an AI-generated script today, the fundamental task is problem decomposition. You have to be able to break a complex goal into logical, verifiable steps.
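That discipline of problem decomposition can be sketched in a few lines of Python. The task and helper names here are invented for illustration: a goal is broken into small steps, each verified in isolation before the composition is trusted, the same habit whether the code came from a coding sheet or from an agent you are auditing.

```python
# Goal: sum the squares of the even numbers in a list.
# Decomposed into three small, independently verifiable steps.

def keep_evens(xs):
    """Step 1: filter to even numbers only."""
    return [x for x in xs if x % 2 == 0]

def square_all(xs):
    """Step 2: square every element."""
    return [x * x for x in xs]

def total(xs):
    """Step 3: reduce to a single sum."""
    return sum(xs)

# Verify each step on its own before trusting the whole.
assert keep_evens([1, 2, 3, 4]) == [2, 4]
assert square_all([2, 4]) == [4, 16]
assert total([4, 16]) == 20

def sum_even_squares(xs):
    """The composition: only as trustworthy as its verified parts."""
    return total(square_all(keep_evens(xs)))

print(sum_even_squares([1, 2, 3, 4, 5, 6]))  # 4 + 16 + 36 = 56
```

The 1978 programmer ran these checks in his head before submitting the deck; the 2026 developer runs them against whatever the agent produced. The step is the same.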

We are currently seeing a massive weeding out of the industry. The recent 0.1% Filter implemented by Google and Accel shows that the market is no longer interested in simple AI wrappers. They want real tech. Out of 4,000 applicants, only five startups survived their vetting. This tells me that the foundational logic taught in those old electrical engineering courses is more relevant than ever. The tools have changed, but the need for rigorous, algorithmic thinking has not.

The Dawn of the Architect

Are we witnessing the final act of the human coder? I do not think so.

We are simply moving up the stack. The father was a bricklayer, but the modern developer is an urban planner. The barrier to entry has been dismantled, but the ceiling for what we can build has been raised to the clouds.

If the generation of the 70s could build the foundations of modern computing using nothing but Fortran IV and intuition, what will the next generation build with 2026 AI as their baseline? We are no longer the ones swinging the hammers. We are the architects of the machines that swing them for us. The question is no longer whether we can write the code, but whether we have the vision to know what code is actually worth writing.

#ComputerScience #AIAgents #ProgrammingHistory #Fortran #SoftwareDevelopment