Working Offline: A Developer’s Past, Present, and Future

The Question That Started It

What happens to a software developer if the computer doesn’t have connectivity? Turn off the Wi-Fi. Turn off the Internet. Turn off the Intranet. The computer can only access its own resources. Can the developer still work? Does the application run?

I’ve been thinking about this question a lot lately, and it’s taken me on a journey through my own career — from a lesson learned twenty years ago to a present-day practice I’ve built into every project, and now to an uncertain future where AI is changing the equation.

Twenty Years Ago: When We Couldn’t Reproduce the System

A long time ago, I worked on a project where the client was in another country, very far from where I was. This was before we had the kind of internet speeds and availability we have today. We couldn’t just transfer big files over the network — it would take forever.

Their system was massive: large codebase, lots of data, deeply integrated with their infrastructure. It ran 24/7 operations. They couldn’t afford downtime, so they’d built a very stable platform on the backend.

Our job was to rewrite the front end. The system had good separation — the front end communicated with the backend using XML (this was before JSON). But here’s the problem: to run the front end, we needed the backend. And we couldn’t reproduce their backend on our development machines.

We tried. We failed.

After multiple attempts, we had to get creative. We built what I called a “fake backend” — essentially a simulator that would respond to the front end’s XML requests with canned responses. We recorded real interactions from their system and played them back during development.

It wasn’t perfect, but it worked. We could develop completely disconnected from their infrastructure. We could work on planes, at home without internet, anywhere. The fake backend ran locally, and we kept moving.
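The core idea of that fake backend can be sketched in a few lines. This is not the original implementation (that project predates Python’s ubiquity and used XML over a proprietary transport); it’s a minimal, hypothetical record-and-playback store, with all names invented:

```python
import hashlib

class FakeBackend:
    """A toy record/playback simulator: answers requests with canned responses."""

    def __init__(self):
        # Maps a fingerprint of the request XML to a previously recorded response.
        self.recordings = {}

    def record(self, request_xml, response_xml):
        """Store a real interaction captured from the live system."""
        self.recordings[self._key(request_xml)] = response_xml

    def handle(self, request_xml):
        """Play back the recorded response, or a recognizable error if none exists."""
        return self.recordings.get(
            self._key(request_xml),
            "<error>no recording for this request</error>",
        )

    def _key(self, request_xml):
        # Normalize whitespace so cosmetic differences don't break playback.
        normalized = " ".join(request_xml.split())
        return hashlib.sha256(normalized.encode()).hexdigest()
```

During development, the front end talks to this instead of the real backend; the recordings are the contract. The trade-off is obvious: you only cover the interactions you recorded, but in exchange you can run entirely offline.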

That experience taught me something I’ve carried forward ever since: dependency on external systems is a constraint I don’t want to live with during development.

Today: Building Independence Into Every Project

Fast forward to more recent projects. I’ve made it a practice to structure applications so I’m never stuck because dependencies are down or slow.

Take authentication, for example. On one project, we eventually integrated Azure B2C for production authentication. But I made sure not to be constrained by it during development. I didn’t want to be limited to only logging in through Azure B2C — it’s slower, it requires internet, and it adds friction to the development loop.

So we built a dual-mode system. During development, we use simple form authentication with a local user database seeded with well-known test users. Fast. No external calls. No waiting.
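The shape of that dual-mode setup looks roughly like this. The project itself was .NET with Azure B2C; this is a language-agnostic sketch in Python, and every name here (`make_auth`, `DEV_USERS`, the environment variable) is illustrative, not from the actual codebase:

```python
import os

# Well-known test users seeded into the local development database.
DEV_USERS = {"alice@test.local": "password1", "bob@test.local": "password2"}

class LocalFormAuth:
    """Simple form auth against the seeded local users. Fast, no network."""
    def login(self, email, password):
        return DEV_USERS.get(email) == password

class ExternalProviderAuth:
    """Stand-in for the real identity provider (Azure B2C in the post).
    Deliberately unimplemented here: it needs network and a real tenant."""
    def login(self, email, password):
        raise NotImplementedError("external provider; not available offline")

def make_auth(env=None):
    """Pick the auth mode from configuration; development gets the local path."""
    env = env or os.getenv("APP_ENV", "development")
    return LocalFormAuth() if env == "development" else ExternalProviderAuth()
```

The switch is pure configuration: QA and production flip to the real provider, development stays local and fast.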

The same approach extended to authorization. We focused on permissions — protecting features, resources, and workflows — and stored those permissions in our own database. This meant we could test the entire authorization flow locally without ever touching Azure B2C.
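Because the permissions live in the application’s own database, a check is just a local lookup. Here’s a minimal sketch with the store modeled as an in-memory set; the permission names are invented for illustration:

```python
# Sketch: permissions held in the app's own store (here, an in-memory set
# of (user, permission) pairs standing in for a database table).
PERMISSIONS = {
    ("alice", "orders:edit"),
    ("alice", "reports:view"),
    ("bob", "reports:view"),
}

def has_permission(user, permission):
    """Local lookup; no call to any external identity service."""
    return (user, permission) in PERMISSIONS

def require(user, permission):
    """Guard a feature or workflow; raise if the user lacks the permission."""
    if not has_permission(user, permission):
        raise PermissionError(f"{user} lacks {permission}")
```

Since nothing here touches the identity provider, the whole authorization flow is testable offline.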

For end-to-end tests, this was huge. We could write tests like: “Given I have permission to X, when I do Y, then I should observe Z.” We bypassed the login screen entirely and focused on the feature itself. Only a few tests actually exercised the authentication mechanism. Everything else assumed the user was already authenticated and authorized.
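That “given/when/then” test style, with the login screen bypassed, can be sketched like this. The session factory, feature function, and permission names are all hypothetical stand-ins for whatever the real test harness provides:

```python
def make_session(user, permissions):
    """Bypass authentication entirely: the test session starts pre-authorized."""
    return {"user": user, "permissions": set(permissions)}

def archive_order(session, order_id, orders):
    """A sample feature under test, guarded by a permission check."""
    if "orders:archive" not in session["permissions"]:
        raise PermissionError("not allowed")
    orders[order_id]["archived"] = True

def test_archive_with_permission():
    # Given I have permission to archive orders...
    orders = {1: {"archived": False}}
    session = make_session("tester", ["orders:archive"])
    # ...when I archive an order, then I should observe it archived.
    archive_order(session, 1, orders)
    assert orders[1]["archived"]
```

Only a handful of dedicated tests exercise the real login; everything else starts from a seeded session and goes straight at the feature.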

We did the same thing with Azure Service Bus. During development, we ran an in-memory message bus. Faster, no external dependencies. But we could flip a switch and use the real Azure Service Bus when we needed to verify that specific integration.
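The in-memory bus only works because both implementations sit behind the same small interface. A minimal sketch of the development-mode side (the real side would wrap the Azure Service Bus client behind the same `send`/`receive` shape; names here are illustrative):

```python
class InMemoryBus:
    """Development-mode message bus: plain in-process queues, no network."""

    def __init__(self):
        self.queues = {}  # queue name -> list of pending messages

    def send(self, queue, message):
        self.queues.setdefault(queue, []).append(message)

    def receive(self, queue):
        """Return the oldest pending message, or None if the queue is empty."""
        msgs = self.queues.get(queue, [])
        return msgs.pop(0) if msgs else None
```

Flipping the switch is then a matter of which implementation the application is configured to construct, the same pattern as the dual-mode authentication.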

In production and QA environments, everything used the real services. But in development? Complete independence.

This approach has saved me more than once. During the freeze in Houston, I lost power for over a week. I kept working the entire time — running off my laptop battery, recharging with a gas generator when needed. When Azure went down due to a cybersecurity attack, I kept working. No internet? No problem.

Tomorrow: The AI Dependency

Now I’m thinking about the future, and it’s more complicated.

I use AI tools heavily in my development work. LLMs help me move faster, think through problems, and generate code. But here’s the thing: I’m not running local models. I need an internet connection for that speed.

If I’m in a situation where I don’t have internet — on a plane without Wi-Fi, in a place with spotty connectivity — I’m not going to be able to move as fast with certain kinds of work.

I could run a local LLM, but it would be quite a bit slower. And it would slow down my entire machine because of the resources it consumes. It’s like the difference between driving a car and walking. I can still get there by walking, but the level of effort is much greater and it’s definitely much slower.

Or I could go back to coding things by hand. Which, at this rate, feels like the difference between walking, riding a bicycle, driving a car, or flying.

The Bigger Question

This gets more interesting when I think about where we’re heading. Imagine AI agents that automatically identify issues in production systems, troubleshoot them, patch the fix, and deploy — all automated, all fast. That’s a huge reliance on connectivity.

If the servers are down, if the electricity is out, are we ready to go back to the basics? To the manual way of doing things?

With the speed that everything is moving, it seems like we need to make sure we don’t lose track of the fundamentals. We need to be prepared to handle these dependencies. We need to ask: if these dependencies aren’t there, can we still do it? And if we can, can we still do it in a timely manner?

What I’m Noticing

I understand that most software these days includes features integrated with other systems. Networking capability is necessary. But is that true for every single feature? I don’t think so.

To me, it’s important to be able to do some work while completely disconnected. It’s a design choice. It’s about resilience. It’s about not being blocked when the world around you goes down.

I’ve built this into my practice over the years, and it’s served me well. But as we move into a future where AI is increasingly part of the development workflow, I’m watching to see how this principle holds up.

It will be interesting to see how that’s going to go.

Discover more from Claudio Lassala's Blog
